Test Report: QEMU_macOS 18585

649852bcd007960ac9edddddae8235c4914b1566:2024-04-08:33941

Tests failed (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 28.58
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.92
36 TestAddons/Setup 10.18
37 TestCertOptions 10.15
38 TestCertExpiration 195.43
39 TestDockerFlags 10.27
40 TestForceSystemdFlag 9.98
41 TestForceSystemdEnv 10.02
47 TestErrorSpam/setup 9.83
56 TestFunctional/serial/StartWithProxy 9.84
58 TestFunctional/serial/SoftStart 5.26
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
70 TestFunctional/serial/MinikubeKubectlCmd 0.69
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.93
72 TestFunctional/serial/ExtraConfig 5.26
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.08
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.13
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.13
91 TestFunctional/parallel/CpCmd 0.28
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.29
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.05
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.05
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.05
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 119.27
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.36
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.37
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.62
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 37.1
150 TestMultiControlPlane/serial/StartCluster 10.2
151 TestMultiControlPlane/serial/DeployApp 110.12
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.08
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.11
159 TestMultiControlPlane/serial/RestartSecondaryNode 57.23
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.11
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.48
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.11
164 TestMultiControlPlane/serial/StopCluster 3.45
165 TestMultiControlPlane/serial/RestartCluster 5.25
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.12
167 TestMultiControlPlane/serial/AddSecondaryNode 0.08
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.11
171 TestImageBuild/serial/Setup 9.88
174 TestJSONOutput/start/Command 9.81
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.49
206 TestMountStart/serial/StartWithMountFirst 9.98
209 TestMultiNode/serial/FreshStart2Nodes 9.96
210 TestMultiNode/serial/DeployApp2Nodes 95.23
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.08
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.1
215 TestMultiNode/serial/CopyFile 0.07
216 TestMultiNode/serial/StopNode 0.15
217 TestMultiNode/serial/StartAfterStop 45.75
218 TestMultiNode/serial/RestartKeepsNodes 8.63
219 TestMultiNode/serial/DeleteNode 0.12
220 TestMultiNode/serial/StopMultiNode 3.79
221 TestMultiNode/serial/RestartMultiNode 5.26
222 TestMultiNode/serial/ValidateNameConflict 20.47
226 TestPreload 10.05
228 TestScheduledStopUnix 10.04
229 TestSkaffold 12.13
232 TestRunningBinaryUpgrade 588.98
234 TestKubernetesUpgrade 17.72
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.63
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.25
250 TestStoppedBinaryUpgrade/Upgrade 573.41
252 TestPause/serial/Start 9.88
262 TestNoKubernetes/serial/StartWithK8s 9.89
263 TestNoKubernetes/serial/StartWithStopK8s 5.31
264 TestNoKubernetes/serial/Start 5.32
268 TestNoKubernetes/serial/StartNoArgs 5.33
270 TestNetworkPlugins/group/auto/Start 9.76
271 TestNetworkPlugins/group/kindnet/Start 9.85
272 TestNetworkPlugins/group/calico/Start 9.89
273 TestNetworkPlugins/group/custom-flannel/Start 9.83
274 TestNetworkPlugins/group/false/Start 9.86
275 TestNetworkPlugins/group/enable-default-cni/Start 9.81
276 TestNetworkPlugins/group/flannel/Start 9.8
277 TestNetworkPlugins/group/bridge/Start 9.78
278 TestNetworkPlugins/group/kubenet/Start 9.92
280 TestStartStop/group/old-k8s-version/serial/FirstStart 9.85
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.12
292 TestStartStop/group/no-preload/serial/FirstStart 9.98
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
297 TestStartStop/group/no-preload/serial/SecondStart 7.22
299 TestStartStop/group/embed-certs/serial/FirstStart 10.16
300 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
301 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
302 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
303 TestStartStop/group/no-preload/serial/Pause 0.11
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10
306 TestStartStop/group/embed-certs/serial/DeployApp 0.09
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
310 TestStartStop/group/embed-certs/serial/SecondStart 6.29
311 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.25
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
318 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
319 TestStartStop/group/embed-certs/serial/Pause 0.11
321 TestStartStop/group/newest-cni/serial/FirstStart 9.85
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
330 TestStartStop/group/newest-cni/serial/SecondStart 5.26
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.11
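
Two failure signatures dominate the details that follow: TestDownloadOnly fails because the kubectl v1.20.0 checksum download for darwin/arm64 returns HTTP 404, and nearly every test that starts a cluster fails because the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet (connection refused). Small diagnostic sketches, which are not part of the suite, accompany the first occurrence of each signature below.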
TestDownloadOnly/v1.20.0/json-events (28.58s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-557000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-557000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (28.577870041s)

-- stdout --
	{"specversion":"1.0","id":"90123099-dae1-4fe3-a006-ab0e3b94e646","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-557000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"19e9ea5c-97c8-4de7-8a3d-7ae3ffb7c5b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18585"}}
	{"specversion":"1.0","id":"b0a36f44-2a8a-4a85-a591-22f621d25897","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig"}}
	{"specversion":"1.0","id":"0cf9fbf4-743f-4d0d-9380-c3426395c8a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"5a191710-1a6d-4dc4-bb5f-96ed979e2293","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e97da22a-fa1c-4a57-91ca-9a1cf7e62eef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube"}}
	{"specversion":"1.0","id":"7fa46b25-450c-4138-8a5e-73fde419dcf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"76eac19c-cba6-44d8-8ccd-73b5b2d28879","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c44c68a-2b1f-4780-bc35-a0bbf5b788fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"e1cd326d-53bc-48d4-b238-bd2497813b99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4b7933d-d546-4831-9078-9955afd84e01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-557000\" primary control-plane node in \"download-only-557000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4ba92c79-0ee8-4861-946a-813f2931cf47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"72115e09-803b-40c4-a248-09b8ee0ecd7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108477260 0x108477260 0x108477260 0x108477260 0x108477260 0x108477260 0x108477260] Decompressors:map[bz2:0x1400000f160 gz:0x1400000f168 tar:0x1400000f0f0 tar.bz2:0x1400000f110 tar.gz:0x1400000f120 tar.xz:0x1400000f130 tar.zst:0x1400000f140 tbz2:0x1400000f110 tgz:0x14
00000f120 txz:0x1400000f130 tzst:0x1400000f140 xz:0x1400000f170 zip:0x1400000f190 zst:0x1400000f178] Getters:map[file:0x14002188560 http:0x14000886320 https:0x14000886370] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"09479601-2add-4672-918d-7e8a2ae71bbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0408 10:35:31.200047    7045 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:35:31.200220    7045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:35:31.200223    7045 out.go:304] Setting ErrFile to fd 2...
	I0408 10:35:31.200225    7045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:35:31.200357    7045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	W0408 10:35:31.200437    7045 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18585-6624/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18585-6624/.minikube/config/config.json: no such file or directory
	I0408 10:35:31.201741    7045 out.go:298] Setting JSON to true
	I0408 10:35:31.220355    7045 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5701,"bootTime":1712592030,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:35:31.220419    7045 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:35:31.226422    7045 out.go:97] [download-only-557000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:35:31.230433    7045 out.go:169] MINIKUBE_LOCATION=18585
	I0408 10:35:31.226546    7045 notify.go:220] Checking for updates...
	W0408 10:35:31.226573    7045 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball: no such file or directory
	I0408 10:35:31.238416    7045 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:35:31.242104    7045 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:35:31.245521    7045 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:35:31.248467    7045 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	W0408 10:35:31.255452    7045 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 10:35:31.255638    7045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:35:31.259220    7045 out.go:97] Using the qemu2 driver based on user configuration
	I0408 10:35:31.259228    7045 start.go:297] selected driver: qemu2
	I0408 10:35:31.259244    7045 start.go:901] validating driver "qemu2" against <nil>
	I0408 10:35:31.259328    7045 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 10:35:31.262801    7045 out.go:169] Automatically selected the socket_vmnet network
	I0408 10:35:31.269321    7045 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0408 10:35:31.269425    7045 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 10:35:31.269506    7045 cni.go:84] Creating CNI manager for ""
	I0408 10:35:31.269525    7045 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0408 10:35:31.269570    7045 start.go:340] cluster config:
	{Name:download-only-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:35:31.275207    7045 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:35:31.279007    7045 out.go:97] Downloading VM boot image ...
	I0408 10:35:31.279033    7045 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso
	I0408 10:35:40.437397    7045 out.go:97] Starting "download-only-557000" primary control-plane node in "download-only-557000" cluster
	I0408 10:35:40.437415    7045 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 10:35:40.501666    7045 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0408 10:35:40.501685    7045 cache.go:56] Caching tarball of preloaded images
	I0408 10:35:40.501883    7045 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 10:35:40.506076    7045 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0408 10:35:40.506085    7045 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0408 10:35:40.587245    7045 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0408 10:35:58.305892    7045 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0408 10:35:58.306101    7045 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0408 10:35:59.003872    7045 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0408 10:35:59.004072    7045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/download-only-557000/config.json ...
	I0408 10:35:59.004098    7045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/download-only-557000/config.json: {Name:mkf18c9815c3e0af2ad0f2abf2eb9a78416f266f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:35:59.004357    7045 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 10:35:59.004545    7045 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0408 10:35:59.693916    7045 out.go:169] 
	W0408 10:35:59.704010    7045 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108477260 0x108477260 0x108477260 0x108477260 0x108477260 0x108477260 0x108477260] Decompressors:map[bz2:0x1400000f160 gz:0x1400000f168 tar:0x1400000f0f0 tar.bz2:0x1400000f110 tar.gz:0x1400000f120 tar.xz:0x1400000f130 tar.zst:0x1400000f140 tbz2:0x1400000f110 tgz:0x1400000f120 txz:0x1400000f130 tzst:0x1400000f140 xz:0x1400000f170 zip:0x1400000f190 zst:0x1400000f178] Getters:map[file:0x14002188560 http:0x14000886320 https:0x14000886370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0408 10:35:59.704049    7045 out_reason.go:110] 
	W0408 10:35:59.712929    7045 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:35:59.715907    7045 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-557000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (28.58s)
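
The failure reduces to a single HTTP 404: the download is configured with `checksum=file:...kubectl.sha256`, so the getter fetches the checksum file before the binary, and that request is what returns `bad response code: 404`. A minimal sketch to confirm the 404 independently, assuming only the URL quoted verbatim in the log above (an illustrative probe, not part of the test suite):

```go
// Probe the checksum URL from the failure above. A 404 here matches
// "Error downloading checksum file: bad response code: 404" in the log.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	resp.Body.Close()
	fmt.Println(resp.Status) // expected, per the log above: "404 Not Found"
}
```

A 404 on the .sha256 file is consistent with no kubectl binary having been published for darwin/arm64 at v1.20.0, which would make this an environmental limitation of the version matrix rather than a regression in minikube itself.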

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
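
This failure is a direct cascade of TestDownloadOnly/v1.20.0/json-events above: the kubectl download never completed, so the cached binary the test stats at .minikube/cache/darwin/arm64/v1.20.0/kubectl was never written.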

TestOffline (9.92s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-479000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-479000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.746508458s)

-- stdout --
	* [offline-docker-479000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-479000" primary control-plane node in "offline-docker-479000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-479000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 10:48:02.153767    8610 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:48:02.153940    8610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:48:02.153943    8610 out.go:304] Setting ErrFile to fd 2...
	I0408 10:48:02.153945    8610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:48:02.154081    8610 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:48:02.155291    8610 out.go:298] Setting JSON to false
	I0408 10:48:02.173052    8610 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6452,"bootTime":1712592030,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:48:02.173131    8610 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:48:02.178197    8610 out.go:177] * [offline-docker-479000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:48:02.186210    8610 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:48:02.186223    8610 notify.go:220] Checking for updates...
	I0408 10:48:02.193185    8610 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:48:02.196153    8610 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:48:02.199195    8610 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:48:02.202209    8610 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:48:02.205101    8610 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:48:02.208521    8610 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:48:02.208579    8610 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:48:02.212204    8610 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 10:48:02.219184    8610 start.go:297] selected driver: qemu2
	I0408 10:48:02.219194    8610 start.go:901] validating driver "qemu2" against <nil>
	I0408 10:48:02.219200    8610 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:48:02.221307    8610 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 10:48:02.224157    8610 out.go:177] * Automatically selected the socket_vmnet network
	I0408 10:48:02.227247    8610 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 10:48:02.227288    8610 cni.go:84] Creating CNI manager for ""
	I0408 10:48:02.227296    8610 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:48:02.227307    8610 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 10:48:02.227345    8610 start.go:340] cluster config:
	{Name:offline-docker-479000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-479000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:48:02.231959    8610 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:48:02.239177    8610 out.go:177] * Starting "offline-docker-479000" primary control-plane node in "offline-docker-479000" cluster
	I0408 10:48:02.243129    8610 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:48:02.243175    8610 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:48:02.243182    8610 cache.go:56] Caching tarball of preloaded images
	I0408 10:48:02.243262    8610 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:48:02.243270    8610 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:48:02.243337    8610 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/offline-docker-479000/config.json ...
	I0408 10:48:02.243350    8610 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/offline-docker-479000/config.json: {Name:mk5c09e6ad191f116a12615a93d5d9ba8494b29b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:48:02.243571    8610 start.go:360] acquireMachinesLock for offline-docker-479000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:48:02.243603    8610 start.go:364] duration metric: took 23.417µs to acquireMachinesLock for "offline-docker-479000"
	I0408 10:48:02.243616    8610 start.go:93] Provisioning new machine with config: &{Name:offline-docker-479000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-479000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:48:02.243645    8610 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:48:02.248201    8610 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 10:48:02.263533    8610 start.go:159] libmachine.API.Create for "offline-docker-479000" (driver="qemu2")
	I0408 10:48:02.263567    8610 client.go:168] LocalClient.Create starting
	I0408 10:48:02.263671    8610 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:48:02.263700    8610 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:02.263710    8610 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:02.263759    8610 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:48:02.263780    8610 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:02.263787    8610 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:02.264141    8610 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:48:02.409325    8610 main.go:141] libmachine: Creating SSH key...
	I0408 10:48:02.465335    8610 main.go:141] libmachine: Creating Disk image...
	I0408 10:48:02.465345    8610 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:48:02.465615    8610 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/disk.qcow2
	I0408 10:48:02.478926    8610 main.go:141] libmachine: STDOUT: 
	I0408 10:48:02.478956    8610 main.go:141] libmachine: STDERR: 
	I0408 10:48:02.479026    8610 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/disk.qcow2 +20000M
	I0408 10:48:02.491388    8610 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:48:02.491410    8610 main.go:141] libmachine: STDERR: 
	I0408 10:48:02.491425    8610 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/disk.qcow2
	I0408 10:48:02.491430    8610 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:48:02.491464    8610 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:d1:72:99:50:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/disk.qcow2
	I0408 10:48:02.493168    8610 main.go:141] libmachine: STDOUT: 
	I0408 10:48:02.493184    8610 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:48:02.493210    8610 client.go:171] duration metric: took 229.634792ms to LocalClient.Create
	I0408 10:48:04.494318    8610 start.go:128] duration metric: took 2.250651917s to createHost
	I0408 10:48:04.494336    8610 start.go:83] releasing machines lock for "offline-docker-479000", held for 2.250714s
	W0408 10:48:04.494347    8610 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:48:04.503173    8610 out.go:177] * Deleting "offline-docker-479000" in qemu2 ...
	W0408 10:48:04.513577    8610 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:48:04.513586    8610 start.go:728] Will try again in 5 seconds ...
	I0408 10:48:09.515909    8610 start.go:360] acquireMachinesLock for offline-docker-479000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:48:09.516417    8610 start.go:364] duration metric: took 352.708µs to acquireMachinesLock for "offline-docker-479000"
	I0408 10:48:09.516563    8610 start.go:93] Provisioning new machine with config: &{Name:offline-docker-479000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-479000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:48:09.516831    8610 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:48:09.526226    8610 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 10:48:09.577049    8610 start.go:159] libmachine.API.Create for "offline-docker-479000" (driver="qemu2")
	I0408 10:48:09.577103    8610 client.go:168] LocalClient.Create starting
	I0408 10:48:09.577214    8610 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:48:09.577270    8610 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:09.577291    8610 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:09.577359    8610 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:48:09.577401    8610 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:09.577413    8610 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:09.577916    8610 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:48:09.732796    8610 main.go:141] libmachine: Creating SSH key...
	I0408 10:48:09.790447    8610 main.go:141] libmachine: Creating Disk image...
	I0408 10:48:09.790453    8610 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:48:09.794535    8610 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/disk.qcow2
	I0408 10:48:09.806685    8610 main.go:141] libmachine: STDOUT: 
	I0408 10:48:09.806711    8610 main.go:141] libmachine: STDERR: 
	I0408 10:48:09.806775    8610 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/disk.qcow2 +20000M
	I0408 10:48:09.817751    8610 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:48:09.817768    8610 main.go:141] libmachine: STDERR: 
	I0408 10:48:09.817778    8610 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/disk.qcow2
	I0408 10:48:09.817783    8610 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:48:09.817826    8610 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:19:cc:f9:65:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/offline-docker-479000/disk.qcow2
	I0408 10:48:09.819545    8610 main.go:141] libmachine: STDOUT: 
	I0408 10:48:09.819560    8610 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:48:09.819575    8610 client.go:171] duration metric: took 242.465417ms to LocalClient.Create
	I0408 10:48:11.821774    8610 start.go:128] duration metric: took 2.30489325s to createHost
	I0408 10:48:11.821917    8610 start.go:83] releasing machines lock for "offline-docker-479000", held for 2.305402625s
	W0408 10:48:11.822330    8610 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-479000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-479000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:48:11.838247    8610 out.go:177] 
	W0408 10:48:11.839982    8610 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:48:11.840064    8610 out.go:239] * 
	* 
	W0408 10:48:11.843681    8610 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:48:11.853130    8610 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-479000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-04-08 10:48:11.869134 -0700 PDT m=+760.752482376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-479000 -n offline-docker-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-479000 -n offline-docker-479000: exit status 7 (69.024541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-479000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-479000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-479000
--- FAIL: TestOffline (9.92s)
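
Unlike the download failures above, TestOffline never reaches Kubernetes: both VM creation attempts fail at the same step, when libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client and the client cannot connect to the daemon's socket. A minimal sketch of that connectivity check, assuming only the socket path quoted in the log (an illustrative probe, not part of the suite):

```go
// Dial the unix socket that minikube's qemu2 driver hands to
// socket_vmnet_client. On this agent the dial should fail with
// "connection refused", matching the StartHost errors in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err) // expected here: connection refused
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```

If the dial fails like this, the remedy is on the host side (restarting whatever manages socket_vmnet on the agent); the `minikube delete -p offline-docker-479000` advice printed by minikube cannot help while the socket is down. The same connection-refused signature appears in TestAddons/Setup below and, judging by the near-uniform ~10s start failures in the table above, plausibly in most of the other failed tests.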

TestAddons/Setup (10.18s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-610000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-610000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.177754541s)

-- stdout --
	* [addons-610000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-610000" primary control-plane node in "addons-610000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-610000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 10:36:22.537668    7204 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:36:22.537815    7204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:36:22.537818    7204 out.go:304] Setting ErrFile to fd 2...
	I0408 10:36:22.537821    7204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:36:22.537956    7204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:36:22.539078    7204 out.go:298] Setting JSON to false
	I0408 10:36:22.555133    7204 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5752,"bootTime":1712592030,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:36:22.555202    7204 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:36:22.559895    7204 out.go:177] * [addons-610000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:36:22.565920    7204 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:36:22.565960    7204 notify.go:220] Checking for updates...
	I0408 10:36:22.569884    7204 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:36:22.571290    7204 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:36:22.574825    7204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:36:22.577911    7204 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:36:22.580891    7204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:36:22.584059    7204 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:36:22.587895    7204 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 10:36:22.594830    7204 start.go:297] selected driver: qemu2
	I0408 10:36:22.594838    7204 start.go:901] validating driver "qemu2" against <nil>
	I0408 10:36:22.594844    7204 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:36:22.597208    7204 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 10:36:22.599889    7204 out.go:177] * Automatically selected the socket_vmnet network
	I0408 10:36:22.602947    7204 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 10:36:22.602982    7204 cni.go:84] Creating CNI manager for ""
	I0408 10:36:22.602988    7204 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:36:22.602992    7204 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 10:36:22.603018    7204 start.go:340] cluster config:
	{Name:addons-610000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-610000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:36:22.607810    7204 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:36:22.615896    7204 out.go:177] * Starting "addons-610000" primary control-plane node in "addons-610000" cluster
	I0408 10:36:22.618849    7204 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:36:22.618863    7204 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:36:22.618870    7204 cache.go:56] Caching tarball of preloaded images
	I0408 10:36:22.618918    7204 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:36:22.618923    7204 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:36:22.619108    7204 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/addons-610000/config.json ...
	I0408 10:36:22.619121    7204 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/addons-610000/config.json: {Name:mk8067d0d59e581f36ced73696c337027580a6ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:36:22.619342    7204 start.go:360] acquireMachinesLock for addons-610000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:36:22.619525    7204 start.go:364] duration metric: took 177.292µs to acquireMachinesLock for "addons-610000"
	I0408 10:36:22.619536    7204 start.go:93] Provisioning new machine with config: &{Name:addons-610000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-610000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:36:22.619574    7204 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:36:22.626804    7204 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0408 10:36:22.644302    7204 start.go:159] libmachine.API.Create for "addons-610000" (driver="qemu2")
	I0408 10:36:22.644324    7204 client.go:168] LocalClient.Create starting
	I0408 10:36:22.644459    7204 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:36:22.714551    7204 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:36:22.766208    7204 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:36:23.085615    7204 main.go:141] libmachine: Creating SSH key...
	I0408 10:36:23.222967    7204 main.go:141] libmachine: Creating Disk image...
	I0408 10:36:23.222977    7204 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:36:23.223216    7204 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/disk.qcow2
	I0408 10:36:23.235535    7204 main.go:141] libmachine: STDOUT: 
	I0408 10:36:23.235564    7204 main.go:141] libmachine: STDERR: 
	I0408 10:36:23.235622    7204 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/disk.qcow2 +20000M
	I0408 10:36:23.246675    7204 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:36:23.246690    7204 main.go:141] libmachine: STDERR: 
	I0408 10:36:23.246709    7204 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/disk.qcow2
	I0408 10:36:23.246712    7204 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:36:23.246742    7204 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:28:3a:28:ba:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/disk.qcow2
	I0408 10:36:23.248459    7204 main.go:141] libmachine: STDOUT: 
	I0408 10:36:23.248474    7204 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:36:23.248496    7204 client.go:171] duration metric: took 604.156583ms to LocalClient.Create
	I0408 10:36:25.250696    7204 start.go:128] duration metric: took 2.631083167s to createHost
	I0408 10:36:25.250789    7204 start.go:83] releasing machines lock for "addons-610000", held for 2.631195667s
	W0408 10:36:25.250858    7204 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:36:25.262246    7204 out.go:177] * Deleting "addons-610000" in qemu2 ...
	W0408 10:36:25.299120    7204 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:36:25.299155    7204 start.go:728] Will try again in 5 seconds ...
	I0408 10:36:30.301369    7204 start.go:360] acquireMachinesLock for addons-610000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:36:30.301747    7204 start.go:364] duration metric: took 286.709µs to acquireMachinesLock for "addons-610000"
	I0408 10:36:30.301861    7204 start.go:93] Provisioning new machine with config: &{Name:addons-610000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-610000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:36:30.302135    7204 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:36:30.320639    7204 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0408 10:36:30.368986    7204 start.go:159] libmachine.API.Create for "addons-610000" (driver="qemu2")
	I0408 10:36:30.369035    7204 client.go:168] LocalClient.Create starting
	I0408 10:36:30.369157    7204 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:36:30.369214    7204 main.go:141] libmachine: Decoding PEM data...
	I0408 10:36:30.369228    7204 main.go:141] libmachine: Parsing certificate...
	I0408 10:36:30.369320    7204 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:36:30.369362    7204 main.go:141] libmachine: Decoding PEM data...
	I0408 10:36:30.369376    7204 main.go:141] libmachine: Parsing certificate...
	I0408 10:36:30.369872    7204 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:36:30.545339    7204 main.go:141] libmachine: Creating SSH key...
	I0408 10:36:30.611463    7204 main.go:141] libmachine: Creating Disk image...
	I0408 10:36:30.611472    7204 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:36:30.611710    7204 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/disk.qcow2
	I0408 10:36:30.624171    7204 main.go:141] libmachine: STDOUT: 
	I0408 10:36:30.624198    7204 main.go:141] libmachine: STDERR: 
	I0408 10:36:30.624257    7204 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/disk.qcow2 +20000M
	I0408 10:36:30.634991    7204 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:36:30.635017    7204 main.go:141] libmachine: STDERR: 
	I0408 10:36:30.635037    7204 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/disk.qcow2
	I0408 10:36:30.635041    7204 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:36:30.635083    7204 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:85:ae:5d:7d:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/addons-610000/disk.qcow2
	I0408 10:36:30.636835    7204 main.go:141] libmachine: STDOUT: 
	I0408 10:36:30.636861    7204 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:36:30.636875    7204 client.go:171] duration metric: took 267.832834ms to LocalClient.Create
	I0408 10:36:32.639222    7204 start.go:128] duration metric: took 2.336933209s to createHost
	I0408 10:36:32.639336    7204 start.go:83] releasing machines lock for "addons-610000", held for 2.337549958s
	W0408 10:36:32.639783    7204 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-610000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-610000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:36:32.647178    7204 out.go:177] 
	W0408 10:36:32.656341    7204 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:36:32.656376    7204 out.go:239] * 
	* 
	W0408 10:36:32.659226    7204 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:36:32.670174    7204 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-610000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.18s)
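Every start failure in this report reduces to the same root cause shown above: socket_vmnet_client cannot connect to the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives the vmnet file descriptor it expects (the -netdev socket,id=net0,fd=3 argument) and host creation aborts. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew as in the minikube qemu2/socket_vmnet setup docs (the service commands below are assumptions about this host's configuration, not taken from this log):

    # Does the daemon's socket exist where minikube expects it?
    ls -l /var/run/socket_vmnet

    # Is the launchd service loaded? socket_vmnet must run as root to use vmnet.
    sudo launchctl list | grep socket_vmnet

    # If loaded but refusing connections, restart it via Homebrew services.
    HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet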

TestCertOptions (10.15s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-055000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-055000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.855601708s)

-- stdout --
	* [cert-options-055000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-055000" primary control-plane node in "cert-options-055000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-055000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-055000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-055000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-055000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.869458ms)

-- stdout --
	* The control-plane node cert-options-055000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-055000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-055000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-055000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-055000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-055000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.217625ms)

-- stdout --
	* The control-plane node cert-options-055000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-055000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-055000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-055000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-055000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-04-08 10:48:42.343669 -0700 PDT m=+791.226815918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-055000 -n cert-options-055000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-055000 -n cert-options-055000: exit status 7 (32.233ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-055000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-055000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-055000
--- FAIL: TestCertOptions (10.15s)
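For reference, the SAN assertions at cert_options_test.go:69 reduce to reading the apiserver certificate inside the guest and checking its Subject Alternative Name extension. Had the VM started, the manual equivalent (reusing the exact ssh command the test ran above, plus a standard grep) would be:

    # Expect 127.0.0.1, 192.168.15.15, localhost and www.google.com in the SAN list.
    out/minikube-darwin-arm64 -p cert-options-055000 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"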

TestCertExpiration (195.43s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-454000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-454000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.066891667s)

-- stdout --
	* [cert-expiration-454000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-454000" primary control-plane node in "cert-expiration-454000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-454000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-454000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-454000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-454000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-454000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.222015375s)

-- stdout --
	* [cert-expiration-454000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-454000" primary control-plane node in "cert-expiration-454000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-454000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-454000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-454000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-454000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-454000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-454000" primary control-plane node in "cert-expiration-454000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-454000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-454000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-454000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-04-08 10:51:42.370524 -0700 PDT m=+971.252480584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-454000 -n cert-expiration-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-454000 -n cert-expiration-454000: exit status 7 (38.570583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-454000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-454000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-454000
--- FAIL: TestCertExpiration (195.43s)
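Note the 195.43s duration against start attempts that each fail within seconds: the test first starts with --cert-expiration=3m, waits out the three-minute expiry window, then restarts with --cert-expiration=8760h and asserts a warning about expired certs, so most of the wall time is the deliberate wait rather than the failures. With a running VM, the expiry under test could be inspected directly; a sketch, borrowing the certificate path from TestCertOptions above:

    # Print notBefore/notAfter for the apiserver certificate inside the guest.
    out/minikube-darwin-arm64 -p cert-expiration-454000 ssh \
      "sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"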

TestDockerFlags (10.27s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-838000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-838000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.009293584s)

-- stdout --
	* [docker-flags-838000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-838000" primary control-plane node in "docker-flags-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 10:48:22.092676    8804 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:48:22.092828    8804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:48:22.092831    8804 out.go:304] Setting ErrFile to fd 2...
	I0408 10:48:22.092833    8804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:48:22.092952    8804 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:48:22.094004    8804 out.go:298] Setting JSON to false
	I0408 10:48:22.110243    8804 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6472,"bootTime":1712592030,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:48:22.110301    8804 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:48:22.116742    8804 out.go:177] * [docker-flags-838000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:48:22.123733    8804 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:48:22.123767    8804 notify.go:220] Checking for updates...
	I0408 10:48:22.131704    8804 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:48:22.137737    8804 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:48:22.140794    8804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:48:22.143757    8804 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:48:22.146734    8804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:48:22.150002    8804 config.go:182] Loaded profile config "force-systemd-flag-996000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:48:22.150067    8804 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:48:22.150113    8804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:48:22.154719    8804 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 10:48:22.161703    8804 start.go:297] selected driver: qemu2
	I0408 10:48:22.161709    8804 start.go:901] validating driver "qemu2" against <nil>
	I0408 10:48:22.161719    8804 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:48:22.164085    8804 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 10:48:22.166704    8804 out.go:177] * Automatically selected the socket_vmnet network
	I0408 10:48:22.169822    8804 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0408 10:48:22.169873    8804 cni.go:84] Creating CNI manager for ""
	I0408 10:48:22.169881    8804 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:48:22.169885    8804 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 10:48:22.169934    8804 start.go:340] cluster config:
	{Name:docker-flags-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:48:22.174503    8804 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:48:22.181761    8804 out.go:177] * Starting "docker-flags-838000" primary control-plane node in "docker-flags-838000" cluster
	I0408 10:48:22.185763    8804 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:48:22.185782    8804 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:48:22.185790    8804 cache.go:56] Caching tarball of preloaded images
	I0408 10:48:22.185853    8804 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:48:22.185858    8804 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:48:22.185911    8804 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/docker-flags-838000/config.json ...
	I0408 10:48:22.185923    8804 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/docker-flags-838000/config.json: {Name:mkfab16203d9be99432876c3ea79f4bb18d8c85b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:48:22.186140    8804 start.go:360] acquireMachinesLock for docker-flags-838000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:48:22.186174    8804 start.go:364] duration metric: took 28.541µs to acquireMachinesLock for "docker-flags-838000"
	I0408 10:48:22.186185    8804 start.go:93] Provisioning new machine with config: &{Name:docker-flags-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:48:22.186218    8804 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:48:22.194726    8804 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 10:48:22.211915    8804 start.go:159] libmachine.API.Create for "docker-flags-838000" (driver="qemu2")
	I0408 10:48:22.211939    8804 client.go:168] LocalClient.Create starting
	I0408 10:48:22.212009    8804 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:48:22.212045    8804 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:22.212054    8804 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:22.212090    8804 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:48:22.212111    8804 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:22.212119    8804 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:22.212449    8804 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:48:22.358076    8804 main.go:141] libmachine: Creating SSH key...
	I0408 10:48:22.458412    8804 main.go:141] libmachine: Creating Disk image...
	I0408 10:48:22.458417    8804 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:48:22.458654    8804 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/disk.qcow2
	I0408 10:48:22.471151    8804 main.go:141] libmachine: STDOUT: 
	I0408 10:48:22.471173    8804 main.go:141] libmachine: STDERR: 
	I0408 10:48:22.471221    8804 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/disk.qcow2 +20000M
	I0408 10:48:22.481922    8804 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:48:22.481944    8804 main.go:141] libmachine: STDERR: 
	I0408 10:48:22.481968    8804 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/disk.qcow2
	I0408 10:48:22.481973    8804 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:48:22.482004    8804 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:7e:69:11:59:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/disk.qcow2
	I0408 10:48:22.483700    8804 main.go:141] libmachine: STDOUT: 
	I0408 10:48:22.483713    8804 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:48:22.483733    8804 client.go:171] duration metric: took 271.785625ms to LocalClient.Create
	I0408 10:48:24.485929    8804 start.go:128] duration metric: took 2.29967475s to createHost
	I0408 10:48:24.485988    8804 start.go:83] releasing machines lock for "docker-flags-838000", held for 2.299790167s
	W0408 10:48:24.486057    8804 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:48:24.503310    8804 out.go:177] * Deleting "docker-flags-838000" in qemu2 ...
	W0408 10:48:24.539131    8804 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:48:24.539164    8804 start.go:728] Will try again in 5 seconds ...
	I0408 10:48:29.541446    8804 start.go:360] acquireMachinesLock for docker-flags-838000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:48:29.603864    8804 start.go:364] duration metric: took 62.209417ms to acquireMachinesLock for "docker-flags-838000"
	I0408 10:48:29.603953    8804 start.go:93] Provisioning new machine with config: &{Name:docker-flags-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:48:29.604225    8804 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:48:29.620881    8804 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 10:48:29.670481    8804 start.go:159] libmachine.API.Create for "docker-flags-838000" (driver="qemu2")
	I0408 10:48:29.670567    8804 client.go:168] LocalClient.Create starting
	I0408 10:48:29.670756    8804 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:48:29.670829    8804 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:29.670846    8804 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:29.670915    8804 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:48:29.670975    8804 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:29.670991    8804 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:29.671566    8804 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:48:29.847142    8804 main.go:141] libmachine: Creating SSH key...
	I0408 10:48:29.991582    8804 main.go:141] libmachine: Creating Disk image...
	I0408 10:48:29.991588    8804 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:48:29.991825    8804 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/disk.qcow2
	I0408 10:48:30.004767    8804 main.go:141] libmachine: STDOUT: 
	I0408 10:48:30.004787    8804 main.go:141] libmachine: STDERR: 
	I0408 10:48:30.004837    8804 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/disk.qcow2 +20000M
	I0408 10:48:30.015812    8804 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:48:30.015831    8804 main.go:141] libmachine: STDERR: 
	I0408 10:48:30.015842    8804 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/disk.qcow2
	I0408 10:48:30.015851    8804 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:48:30.015887    8804 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:56:b6:67:e7:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/docker-flags-838000/disk.qcow2
	I0408 10:48:30.017659    8804 main.go:141] libmachine: STDOUT: 
	I0408 10:48:30.017675    8804 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:48:30.017687    8804 client.go:171] duration metric: took 347.098292ms to LocalClient.Create
	I0408 10:48:32.019883    8804 start.go:128] duration metric: took 2.41561425s to createHost
	I0408 10:48:32.019931    8804 start.go:83] releasing machines lock for "docker-flags-838000", held for 2.416016417s
	W0408 10:48:32.020298    8804 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:48:32.037878    8804 out.go:177] 
	W0408 10:48:32.043066    8804 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:48:32.043112    8804 out.go:239] * 
	* 
	W0408 10:48:32.045853    8804 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:48:32.057934    8804 out.go:177] 

** /stderr **
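
The stderr above shows the qemu2 driver getting through the disk-image steps cleanly (qemu-img convert from raw to qcow2, then resize by +20000M) and failing only when socket_vmnet_client tries to give the VM its network socket. For reference, a minimal Go sketch of those two disk steps, runnable on its own; the /tmp paths are hypothetical placeholders, and it assumes qemu-img is installed and the raw source image already exists:

    // Sketch of the two qemu-img steps logged above. The paths are
    // hypothetical placeholders; qemu-img must be on PATH and the raw
    // source image must already exist.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        raw := "/tmp/demo/disk.qcow2.raw" // stands in for .minikube/machines/<profile>/disk.qcow2.raw
        img := "/tmp/demo/disk.qcow2"     // stands in for the converted image

        // qemu-img convert -f raw -O qcow2 <raw> <img>
        if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img).CombinedOutput(); err != nil {
            fmt.Printf("convert failed: %v\n%s", err, out)
            return
        }
        // qemu-img resize <img> +20000M (grow the virtual size, as in the log)
        if out, err := exec.Command("qemu-img", "resize", img, "+20000M").CombinedOutput(); err != nil {
            fmt.Printf("resize failed: %v\n%s", err, out)
            return
        }
        fmt.Println("disk image ready:", img)
    }

Both commands exit cleanly in the log (empty STDERR, "Image resized."), so disk-image creation can be ruled out as the cause of this failure.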
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-838000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-838000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-838000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.655416ms)

-- stdout --
	* The control-plane node docker-flags-838000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-838000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-838000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-838000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-838000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-838000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-838000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-838000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-838000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.771ms)

-- stdout --
	* The control-plane node docker-flags-838000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-838000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-838000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-838000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-838000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-838000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-04-08 10:48:32.198048 -0700 PDT m=+781.081262209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-838000 -n docker-flags-838000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-838000 -n docker-flags-838000: exit status 7 (31.129417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-838000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-838000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-838000
--- FAIL: TestDockerFlags (10.27s)
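
Every VM launch in this test (and in the two that follow) dies at the same point: qemu-system-aarch64 is started through socket_vmnet_client, which cannot reach the daemon's unix socket and reports Failed to connect to "/var/run/socket_vmnet": Connection refused. A refused connection on a unix socket means nothing is accepting on that path, i.e. the socket_vmnet daemon is not running on this agent. A minimal Go probe for that condition; the socket path comes from the log, everything else is illustrative:

    // Sketch: check whether anything is listening on the socket_vmnet
    // unix socket. On this agent it should fail exactly like the log,
    // with "connect: connection refused".
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path taken from the log above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Printf("cannot connect to %s: %v\n", sock, err)
            return
        }
        conn.Close()
        fmt.Printf("%s is accepting connections\n", sock)
    }

If the probe is refused the same way, the fix is on the host rather than in the tests: the socket_vmnet daemon needs to be (re)started, which for a Homebrew install is typically "sudo brew services start socket_vmnet" per the minikube qemu2 driver docs.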

TestForceSystemdFlag (9.98s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-996000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-996000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.754532125s)

-- stdout --
	* [force-systemd-flag-996000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-996000" primary control-plane node in "force-systemd-flag-996000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-996000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 10:48:17.130502    8782 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:48:17.130666    8782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:48:17.130674    8782 out.go:304] Setting ErrFile to fd 2...
	I0408 10:48:17.130676    8782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:48:17.130840    8782 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:48:17.132224    8782 out.go:298] Setting JSON to false
	I0408 10:48:17.148502    8782 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6467,"bootTime":1712592030,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:48:17.148571    8782 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:48:17.155713    8782 out.go:177] * [force-systemd-flag-996000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:48:17.163711    8782 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:48:17.163712    8782 notify.go:220] Checking for updates...
	I0408 10:48:17.171638    8782 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:48:17.174606    8782 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:48:17.177614    8782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:48:17.180637    8782 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:48:17.182016    8782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:48:17.184927    8782 config.go:182] Loaded profile config "force-systemd-env-117000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:48:17.184995    8782 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:48:17.185057    8782 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:48:17.189620    8782 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 10:48:17.194633    8782 start.go:297] selected driver: qemu2
	I0408 10:48:17.194639    8782 start.go:901] validating driver "qemu2" against <nil>
	I0408 10:48:17.194644    8782 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:48:17.196968    8782 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 10:48:17.200620    8782 out.go:177] * Automatically selected the socket_vmnet network
	I0408 10:48:17.203703    8782 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 10:48:17.203746    8782 cni.go:84] Creating CNI manager for ""
	I0408 10:48:17.203755    8782 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:48:17.203759    8782 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 10:48:17.203793    8782 start.go:340] cluster config:
	{Name:force-systemd-flag-996000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-996000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:48:17.208500    8782 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:48:17.215654    8782 out.go:177] * Starting "force-systemd-flag-996000" primary control-plane node in "force-systemd-flag-996000" cluster
	I0408 10:48:17.219628    8782 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:48:17.219643    8782 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:48:17.219654    8782 cache.go:56] Caching tarball of preloaded images
	I0408 10:48:17.219722    8782 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:48:17.219727    8782 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:48:17.219811    8782 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/force-systemd-flag-996000/config.json ...
	I0408 10:48:17.219828    8782 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/force-systemd-flag-996000/config.json: {Name:mk98be417a413f12e9cee29248217f87b21dfb85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:48:17.220051    8782 start.go:360] acquireMachinesLock for force-systemd-flag-996000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:48:17.220087    8782 start.go:364] duration metric: took 26.5µs to acquireMachinesLock for "force-systemd-flag-996000"
	I0408 10:48:17.220101    8782 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-996000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-996000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:48:17.220136    8782 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:48:17.227597    8782 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 10:48:17.245515    8782 start.go:159] libmachine.API.Create for "force-systemd-flag-996000" (driver="qemu2")
	I0408 10:48:17.245541    8782 client.go:168] LocalClient.Create starting
	I0408 10:48:17.245602    8782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:48:17.245637    8782 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:17.245656    8782 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:17.245700    8782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:48:17.245724    8782 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:17.245731    8782 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:17.246115    8782 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:48:17.391813    8782 main.go:141] libmachine: Creating SSH key...
	I0408 10:48:17.430789    8782 main.go:141] libmachine: Creating Disk image...
	I0408 10:48:17.430796    8782 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:48:17.431023    8782 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/disk.qcow2
	I0408 10:48:17.443187    8782 main.go:141] libmachine: STDOUT: 
	I0408 10:48:17.443208    8782 main.go:141] libmachine: STDERR: 
	I0408 10:48:17.443253    8782 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/disk.qcow2 +20000M
	I0408 10:48:17.453709    8782 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:48:17.453734    8782 main.go:141] libmachine: STDERR: 
	I0408 10:48:17.453752    8782 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/disk.qcow2
	I0408 10:48:17.453757    8782 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:48:17.453789    8782 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:9b:4d:f3:9e:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/disk.qcow2
	I0408 10:48:17.455529    8782 main.go:141] libmachine: STDOUT: 
	I0408 10:48:17.455549    8782 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:48:17.455568    8782 client.go:171] duration metric: took 210.019833ms to LocalClient.Create
	I0408 10:48:19.457821    8782 start.go:128] duration metric: took 2.237645083s to createHost
	I0408 10:48:19.457890    8782 start.go:83] releasing machines lock for "force-systemd-flag-996000", held for 2.237777125s
	W0408 10:48:19.457985    8782 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:48:19.475973    8782 out.go:177] * Deleting "force-systemd-flag-996000" in qemu2 ...
	W0408 10:48:19.503926    8782 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:48:19.503970    8782 start.go:728] Will try again in 5 seconds ...
	I0408 10:48:24.506200    8782 start.go:360] acquireMachinesLock for force-systemd-flag-996000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:48:24.506508    8782 start.go:364] duration metric: took 237.333µs to acquireMachinesLock for "force-systemd-flag-996000"
	I0408 10:48:24.506614    8782 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-996000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-996000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:48:24.506897    8782 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:48:24.519354    8782 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 10:48:24.567215    8782 start.go:159] libmachine.API.Create for "force-systemd-flag-996000" (driver="qemu2")
	I0408 10:48:24.567265    8782 client.go:168] LocalClient.Create starting
	I0408 10:48:24.567429    8782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:48:24.567512    8782 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:24.567527    8782 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:24.567592    8782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:48:24.567636    8782 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:24.567648    8782 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:24.568167    8782 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:48:24.733966    8782 main.go:141] libmachine: Creating SSH key...
	I0408 10:48:24.771436    8782 main.go:141] libmachine: Creating Disk image...
	I0408 10:48:24.771441    8782 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:48:24.771941    8782 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/disk.qcow2
	I0408 10:48:24.784288    8782 main.go:141] libmachine: STDOUT: 
	I0408 10:48:24.784309    8782 main.go:141] libmachine: STDERR: 
	I0408 10:48:24.784366    8782 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/disk.qcow2 +20000M
	I0408 10:48:24.795093    8782 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:48:24.795113    8782 main.go:141] libmachine: STDERR: 
	I0408 10:48:24.795128    8782 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/disk.qcow2
	I0408 10:48:24.795131    8782 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:48:24.795173    8782 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:07:94:99:75:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-flag-996000/disk.qcow2
	I0408 10:48:24.796989    8782 main.go:141] libmachine: STDOUT: 
	I0408 10:48:24.797007    8782 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:48:24.797019    8782 client.go:171] duration metric: took 229.747291ms to LocalClient.Create
	I0408 10:48:26.799206    8782 start.go:128] duration metric: took 2.292257583s to createHost
	I0408 10:48:26.799379    8782 start.go:83] releasing machines lock for "force-systemd-flag-996000", held for 2.29273625s
	W0408 10:48:26.799677    8782 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-996000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-996000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:48:26.817541    8782 out.go:177] 
	W0408 10:48:26.825444    8782 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:48:26.825587    8782 out.go:239] * 
	* 
	W0408 10:48:26.828299    8782 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:48:26.840323    8782 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-996000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-996000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-996000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.901792ms)

-- stdout --
	* The control-plane node force-systemd-flag-996000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-996000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-996000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-04-08 10:48:26.936554 -0700 PDT m=+775.819802793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-996000 -n force-systemd-flag-996000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-996000 -n force-systemd-flag-996000: exit status 7 (35.109708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-996000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-996000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-996000
--- FAIL: TestForceSystemdFlag (9.98s)
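
The same create/retry shape repeats in each of these failures: LocalClient.Create fails, minikube deletes the half-created profile, waits five seconds ("Will try again in 5 seconds ..."), retries once, then exits with GUEST_PROVISION. A reduced Go sketch of that control flow, not minikube's actual start.go code; the error string is copied from the log:

    // Reduced sketch of the control flow above (not minikube's actual
    // start.go code). The error string is copied from the log; the
    // stand-in create step always fails, as both attempts did here.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func createHost() error { // stands in for libmachine.API.Create
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := createHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            if err := createHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }

Because the refusal is environmental rather than transient, the single retry cannot succeed, which is why each of these tests burns roughly ten seconds and fails identically.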

TestForceSystemdEnv (10.02s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-117000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-117000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.7981895s)

-- stdout --
	* [force-systemd-env-117000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-117000" primary control-plane node in "force-systemd-env-117000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-117000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 10:48:12.074713    8750 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:48:12.074859    8750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:48:12.074869    8750 out.go:304] Setting ErrFile to fd 2...
	I0408 10:48:12.074871    8750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:48:12.074990    8750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:48:12.076103    8750 out.go:298] Setting JSON to false
	I0408 10:48:12.092522    8750 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6462,"bootTime":1712592030,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:48:12.092602    8750 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:48:12.099183    8750 out.go:177] * [force-systemd-env-117000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:48:12.109111    8750 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:48:12.105194    8750 notify.go:220] Checking for updates...
	I0408 10:48:12.117080    8750 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:48:12.124047    8750 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:48:12.132056    8750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:48:12.140062    8750 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:48:12.148074    8750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0408 10:48:12.152429    8750 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:48:12.152475    8750 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:48:12.156106    8750 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 10:48:12.163097    8750 start.go:297] selected driver: qemu2
	I0408 10:48:12.163102    8750 start.go:901] validating driver "qemu2" against <nil>
	I0408 10:48:12.163107    8750 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:48:12.165348    8750 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 10:48:12.170580    8750 out.go:177] * Automatically selected the socket_vmnet network
	I0408 10:48:12.175180    8750 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 10:48:12.175220    8750 cni.go:84] Creating CNI manager for ""
	I0408 10:48:12.175227    8750 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:48:12.175231    8750 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 10:48:12.175256    8750 start.go:340] cluster config:
	{Name:force-systemd-env-117000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-117000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:48:12.179751    8750 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:48:12.187111    8750 out.go:177] * Starting "force-systemd-env-117000" primary control-plane node in "force-systemd-env-117000" cluster
	I0408 10:48:12.193156    8750 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:48:12.193172    8750 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:48:12.193180    8750 cache.go:56] Caching tarball of preloaded images
	I0408 10:48:12.193234    8750 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:48:12.193239    8750 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:48:12.193308    8750 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/force-systemd-env-117000/config.json ...
	I0408 10:48:12.193323    8750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/force-systemd-env-117000/config.json: {Name:mkc27c3b49d752ab2539399757e6c5b637235e0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:48:12.193605    8750 start.go:360] acquireMachinesLock for force-systemd-env-117000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:48:12.193636    8750 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "force-systemd-env-117000"
	I0408 10:48:12.193646    8750 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-117000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-117000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:48:12.193673    8750 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:48:12.202136    8750 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 10:48:12.218201    8750 start.go:159] libmachine.API.Create for "force-systemd-env-117000" (driver="qemu2")
	I0408 10:48:12.218232    8750 client.go:168] LocalClient.Create starting
	I0408 10:48:12.218286    8750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:48:12.218314    8750 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:12.218322    8750 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:12.218358    8750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:48:12.218382    8750 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:12.218388    8750 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:12.218680    8750 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:48:12.366082    8750 main.go:141] libmachine: Creating SSH key...
	I0408 10:48:12.449778    8750 main.go:141] libmachine: Creating Disk image...
	I0408 10:48:12.449789    8750 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:48:12.449987    8750 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/disk.qcow2
	I0408 10:48:12.462253    8750 main.go:141] libmachine: STDOUT: 
	I0408 10:48:12.462275    8750 main.go:141] libmachine: STDERR: 
	I0408 10:48:12.462343    8750 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/disk.qcow2 +20000M
	I0408 10:48:12.473854    8750 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:48:12.473876    8750 main.go:141] libmachine: STDERR: 
	I0408 10:48:12.473891    8750 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/disk.qcow2
	I0408 10:48:12.473895    8750 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:48:12.473935    8750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:79:66:f7:be:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/disk.qcow2
	I0408 10:48:12.475873    8750 main.go:141] libmachine: STDOUT: 
	I0408 10:48:12.475889    8750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:48:12.475912    8750 client.go:171] duration metric: took 257.670083ms to LocalClient.Create
	I0408 10:48:14.478291    8750 start.go:128] duration metric: took 2.284558s to createHost
	I0408 10:48:14.478395    8750 start.go:83] releasing machines lock for "force-systemd-env-117000", held for 2.284734459s
	W0408 10:48:14.478451    8750 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:48:14.485496    8750 out.go:177] * Deleting "force-systemd-env-117000" in qemu2 ...
	W0408 10:48:14.520508    8750 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:48:14.520535    8750 start.go:728] Will try again in 5 seconds ...
	I0408 10:48:19.522777    8750 start.go:360] acquireMachinesLock for force-systemd-env-117000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:48:19.523014    8750 start.go:364] duration metric: took 182.959µs to acquireMachinesLock for "force-systemd-env-117000"
	I0408 10:48:19.523117    8750 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-117000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-117000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:48:19.523328    8750 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:48:19.530907    8750 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0408 10:48:19.573070    8750 start.go:159] libmachine.API.Create for "force-systemd-env-117000" (driver="qemu2")
	I0408 10:48:19.573124    8750 client.go:168] LocalClient.Create starting
	I0408 10:48:19.573223    8750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:48:19.573280    8750 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:19.573295    8750 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:19.573361    8750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:48:19.573412    8750 main.go:141] libmachine: Decoding PEM data...
	I0408 10:48:19.573423    8750 main.go:141] libmachine: Parsing certificate...
	I0408 10:48:19.574615    8750 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:48:19.736228    8750 main.go:141] libmachine: Creating SSH key...
	I0408 10:48:19.772592    8750 main.go:141] libmachine: Creating Disk image...
	I0408 10:48:19.772598    8750 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:48:19.772935    8750 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/disk.qcow2
	I0408 10:48:19.785097    8750 main.go:141] libmachine: STDOUT: 
	I0408 10:48:19.785126    8750 main.go:141] libmachine: STDERR: 
	I0408 10:48:19.785202    8750 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/disk.qcow2 +20000M
	I0408 10:48:19.795971    8750 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:48:19.795989    8750 main.go:141] libmachine: STDERR: 
	I0408 10:48:19.796008    8750 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/disk.qcow2
	I0408 10:48:19.796013    8750 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:48:19.796045    8750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:07:de:f3:ff:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/force-systemd-env-117000/disk.qcow2
	I0408 10:48:19.797766    8750 main.go:141] libmachine: STDOUT: 
	I0408 10:48:19.797781    8750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:48:19.797794    8750 client.go:171] duration metric: took 224.662333ms to LocalClient.Create
	I0408 10:48:21.799989    8750 start.go:128] duration metric: took 2.276619292s to createHost
	I0408 10:48:21.800047    8750 start.go:83] releasing machines lock for "force-systemd-env-117000", held for 2.276998334s
	W0408 10:48:21.800429    8750 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-117000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-117000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:48:21.812182    8750 out.go:177] 
	W0408 10:48:21.816113    8750 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:48:21.816231    8750 out.go:239] * 
	* 
	W0408 10:48:21.818896    8750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:48:21.827080    8750 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-117000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-117000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-117000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.024792ms)

-- stdout --
	* The control-plane node force-systemd-env-117000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-117000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-117000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-04-08 10:48:21.923027 -0700 PDT m=+770.806308501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-117000 -n force-systemd-env-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-117000 -n force-systemd-env-117000: exit status 7 (35.603375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-117000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-117000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-117000
--- FAIL: TestForceSystemdEnv (10.02s)
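Every start failure in this report has the same proximate cause: qemu-system-aarch64 is launched through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal Go sketch of that reachability check, not part of the suite (the program and names are ours; only the socket path is taken verbatim from the log):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path copied from the failing libmachine command line above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A refused connection here matches the STDERR in the log: the
		// socket file may exist, but no daemon is accepting on it.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails the way the runs above do, no minikube flag will help; the socket_vmnet daemon on the build host has to be brought up first.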

TestErrorSpam/setup (9.83s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-898000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-898000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 --driver=qemu2 : exit status 80 (9.824841417s)

-- stdout --
	* [nospam-898000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-898000" primary control-plane node in "nospam-898000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-898000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-898000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-898000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-898000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-898000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18585
- KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-898000" primary control-plane node in "nospam-898000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-898000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-898000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.83s)
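The per-line "unexpected stderr" failures above come from a scan that compares each stderr line against a set of acceptable messages. A simplified sketch of that kind of allow-list check, assuming substring matching (the real test's rules and allow-list differ; all names here are illustrative):

package main

import (
	"fmt"
	"strings"
)

// unexpectedLines returns every non-empty stderr line that matches none of
// the allowed substrings. This mirrors the shape of the check, not its
// exact rules.
func unexpectedLines(stderr string, allowed []string) []string {
	var out []string
	for _, line := range strings.Split(stderr, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		ok := false
		for _, a := range allowed {
			if strings.Contains(line, a) {
				ok = true
				break
			}
		}
		if !ok {
			out = append(out, line)
		}
	}
	return out
}

func main() {
	stderr := "! StartHost failed, but will try again: creating host: create: creating: ...\n* Failed to start qemu2 VM. ..."
	for _, l := range unexpectedLines(stderr, []string{"Downloading"}) {
		fmt.Printf("unexpected stderr: %q\n", l)
	}
}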

TestFunctional/serial/StartWithProxy (9.84s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-193000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-193000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.768986291s)

-- stdout --
	* [functional-193000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-193000" primary control-plane node in "functional-193000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-193000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51093 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51093 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51093 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-193000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-193000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-193000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18585
- KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-193000" primary control-plane node in "functional-193000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-193000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51093 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51093 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51093 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-193000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (72.597708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.84s)
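The assertions above reduce to: run the minikube binary, capture both streams and the exit status (80 here), then look for expected markers such as "You appear to be using a proxy" in stderr and "Found network options:" in stdout. A compact sketch of that harness pattern; the runCmd helper is ours, and the glob-style "*...*" wants are treated as plain substring checks:

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// runCmd runs a binary and returns its streams and exit code. A non-zero
// exit (status 80 in the run above) is data for the test, not an error.
func runCmd(name string, args ...string) (stdout, stderr string, code int, err error) {
	cmd := exec.Command(name, args...)
	var out, errs bytes.Buffer
	cmd.Stdout, cmd.Stderr = &out, &errs
	err = cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code, err = ee.ExitCode(), nil
	}
	return out.String(), errs.String(), code, err
}

func main() {
	stdout, stderr, code, err := runCmd("out/minikube-darwin-arm64",
		"start", "-p", "functional-193000", "--memory=4000",
		"--apiserver-port=8441", "--wait=all", "--driver=qemu2")
	if err != nil {
		panic(err)
	}
	fmt.Println("exit status:", code)
	if !strings.Contains(stdout, "Found network options:") {
		fmt.Println("FAIL: want *Found network options:* in stdout")
	}
	if !strings.Contains(stderr, "You appear to be using a proxy") {
		fmt.Println("FAIL: want *You appear to be using a proxy* in stderr")
	}
}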

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-193000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-193000 --alsologtostderr -v=8: exit status 80 (5.186902916s)

-- stdout --
	* [functional-193000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-193000" primary control-plane node in "functional-193000" cluster
	* Restarting existing qemu2 VM for "functional-193000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-193000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 10:37:02.135176    7349 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:37:02.135294    7349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:37:02.135298    7349 out.go:304] Setting ErrFile to fd 2...
	I0408 10:37:02.135300    7349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:37:02.135448    7349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:37:02.136433    7349 out.go:298] Setting JSON to false
	I0408 10:37:02.152607    7349 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5792,"bootTime":1712592030,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:37:02.152670    7349 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:37:02.157989    7349 out.go:177] * [functional-193000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:37:02.164902    7349 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:37:02.168905    7349 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:37:02.164966    7349 notify.go:220] Checking for updates...
	I0408 10:37:02.171904    7349 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:37:02.174860    7349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:37:02.177837    7349 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:37:02.180954    7349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:37:02.184168    7349 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:37:02.184227    7349 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:37:02.188860    7349 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 10:37:02.195853    7349 start.go:297] selected driver: qemu2
	I0408 10:37:02.195859    7349 start.go:901] validating driver "qemu2" against &{Name:functional-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:37:02.195912    7349 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:37:02.198231    7349 cni.go:84] Creating CNI manager for ""
	I0408 10:37:02.198252    7349 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:37:02.198303    7349 start.go:340] cluster config:
	{Name:functional-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:37:02.202676    7349 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:37:02.209890    7349 out.go:177] * Starting "functional-193000" primary control-plane node in "functional-193000" cluster
	I0408 10:37:02.213870    7349 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:37:02.213886    7349 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:37:02.213895    7349 cache.go:56] Caching tarball of preloaded images
	I0408 10:37:02.213949    7349 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:37:02.213955    7349 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:37:02.214020    7349 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/functional-193000/config.json ...
	I0408 10:37:02.214537    7349 start.go:360] acquireMachinesLock for functional-193000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:37:02.214564    7349 start.go:364] duration metric: took 20.875µs to acquireMachinesLock for "functional-193000"
	I0408 10:37:02.214572    7349 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:37:02.214578    7349 fix.go:54] fixHost starting: 
	I0408 10:37:02.214702    7349 fix.go:112] recreateIfNeeded on functional-193000: state=Stopped err=<nil>
	W0408 10:37:02.214711    7349 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:37:02.222864    7349 out.go:177] * Restarting existing qemu2 VM for "functional-193000" ...
	I0408 10:37:02.226874    7349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:3e:d4:5d:2a:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/disk.qcow2
	I0408 10:37:02.228918    7349 main.go:141] libmachine: STDOUT: 
	I0408 10:37:02.228936    7349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:37:02.228961    7349 fix.go:56] duration metric: took 14.3835ms for fixHost
	I0408 10:37:02.228966    7349 start.go:83] releasing machines lock for "functional-193000", held for 14.397ms
	W0408 10:37:02.228971    7349 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:37:02.229008    7349 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:37:02.229012    7349 start.go:728] Will try again in 5 seconds ...
	I0408 10:37:07.229784    7349 start.go:360] acquireMachinesLock for functional-193000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:37:07.230302    7349 start.go:364] duration metric: took 373.458µs to acquireMachinesLock for "functional-193000"
	I0408 10:37:07.230442    7349 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:37:07.230463    7349 fix.go:54] fixHost starting: 
	I0408 10:37:07.231156    7349 fix.go:112] recreateIfNeeded on functional-193000: state=Stopped err=<nil>
	W0408 10:37:07.231185    7349 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:37:07.236565    7349 out.go:177] * Restarting existing qemu2 VM for "functional-193000" ...
	I0408 10:37:07.244744    7349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:3e:d4:5d:2a:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/disk.qcow2
	I0408 10:37:07.255016    7349 main.go:141] libmachine: STDOUT: 
	I0408 10:37:07.255081    7349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:37:07.255159    7349 fix.go:56] duration metric: took 24.700625ms for fixHost
	I0408 10:37:07.255176    7349 start.go:83] releasing machines lock for "functional-193000", held for 24.848709ms
	W0408 10:37:07.255316    7349 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-193000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-193000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:37:07.261609    7349 out.go:177] 
	W0408 10:37:07.265546    7349 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:37:07.265567    7349 out.go:239] * 
	* 
	W0408 10:37:07.267920    7349 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:37:07.276577    7349 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-193000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.188613708s for "functional-193000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (71.527125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
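The alsologtostderr trace above also shows the driver's retry shape: fixHost fails, minikube logs "Will try again in 5 seconds ...", makes one more identical attempt, and only then exits with GUEST_PROVISION. A sketch of that single-retry pattern, with a stub startHost that fails the way this run does (the stub is ours, not minikube code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the real driver start; in this run it always
// fails the same way, because socket_vmnet is not listening.
func startHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Printf("* Failed to start qemu2 VM: %v\n", err)
		}
	}
}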

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (30.996542ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-193000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (32.176667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-193000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-193000 get po -A: exit status 1 (26.407292ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-193000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-193000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-193000\n"*: args "kubectl --context functional-193000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-193000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (32.679041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh sudo crictl images: exit status 83 (43.6795ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-193000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (42.880625ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-193000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (42.788542ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (42.981958ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-193000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.69s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 kubectl -- --context functional-193000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 kubectl -- --context functional-193000 get pods: exit status 1 (651.575083ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-193000
	* no server found for cluster "functional-193000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-193000 kubectl -- --context functional-193000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (33.737875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.69s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-193000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-193000 get pods: exit status 1 (899.830125ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-193000
	* no server found for cluster "functional-193000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-193000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (32.13775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)
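Note: both kubectl tests above fail at configuration lookup, not at the API server; the functional-193000 context was never written because the cluster never started. A quick confirmation sketch (not from the test run; the KUBECONFIG path is taken from the log):

    # List the contexts kubectl can see; functional-193000 should be absent.
    KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig kubectl config get-contexts
    # Print just the context names, which tolerates an empty kubeconfig:
    KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig kubectl config view -o jsonpath='{.contexts[*].name}'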

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-193000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-193000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.190471292s)

-- stdout --
	* [functional-193000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-193000" primary control-plane node in "functional-193000" cluster
	* Restarting existing qemu2 VM for "functional-193000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-193000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-193000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-193000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.19104975s for "functional-193000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (70.698542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
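Note: the underlying fault is the qemu2 driver failing to dial /var/run/socket_vmnet, not the --extra-config flag; the same "Connection refused" appears in every VM start attempt in this report. A recovery sketch, assuming socket_vmnet was installed via Homebrew as minikube's qemu2 driver docs suggest:

    # The socket the driver connects to must exist and be owned by root:
    ls -l /var/run/socket_vmnet
    # Restart the service that creates the socket (it must run as root):
    sudo brew services restart socket_vmnet
    # Then retry the failed start:
    out/minikube-darwin-arm64 start -p functional-193000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all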

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-193000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-193000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.474541ms)

** stderr ** 
	error: context "functional-193000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-193000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (32.574542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
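Note: ComponentHealth never reaches its actual assertion about control-plane pod health; kubectl fails at context resolution first. Had the cluster been running, the check the test performs amounts to something like this sketch (not a command from the run; the test parses the same JSON for healthy component statuses):

    # List control-plane pods with their phases:
    kubectl --context functional-193000 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'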

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 logs: exit status 83 (78.807542ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-557000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:35 PDT |                     |
	|         | -p download-only-557000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:35 PDT | 08 Apr 24 10:36 PDT |
	| delete  | -p download-only-557000                                                  | download-only-557000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	| start   | -o=json --download-only                                                  | download-only-702000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | -p download-only-702000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	| delete  | -p download-only-702000                                                  | download-only-702000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	| start   | -o=json --download-only                                                  | download-only-347000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | -p download-only-347000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1                                        |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	| delete  | -p download-only-347000                                                  | download-only-347000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	| delete  | -p download-only-557000                                                  | download-only-557000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	| delete  | -p download-only-702000                                                  | download-only-702000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	| delete  | -p download-only-347000                                                  | download-only-347000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	| start   | --download-only -p                                                       | binary-mirror-035000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | binary-mirror-035000                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
	|         | --binary-mirror                                                          |                      |         |                |                     |                     |
	|         | http://127.0.0.1:51060                                                   |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-035000                                                  | binary-mirror-035000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	| addons  | enable dashboard -p                                                      | addons-610000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | addons-610000                                                            |                      |         |                |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-610000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | addons-610000                                                            |                      |         |                |                     |                     |
	| start   | -p addons-610000 --wait=true                                             | addons-610000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
	|         | --addons=registry                                                        |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
	| delete  | -p addons-610000                                                         | addons-610000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	| start   | -p nospam-898000 -n=1 --memory=2250 --wait=false                         | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| start   | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| pause   | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| unpause | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| stop    | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| delete  | -p nospam-898000                                                         | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	| start   | -p functional-193000                                                     | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | --memory=4000                                                            |                      |         |                |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
	| start   | -p functional-193000                                                     | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
	| cache   | functional-193000 cache add                                              | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | functional-193000 cache add                                              | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | functional-193000 cache add                                              | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-193000 cache add                                              | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
	|         | minikube-local-cache-test:functional-193000                              |                      |         |                |                     |                     |
	| cache   | functional-193000 cache delete                                           | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
	|         | minikube-local-cache-test:functional-193000                              |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
	| ssh     | functional-193000 ssh sudo                                               | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT |                     |
	|         | crictl images                                                            |                      |         |                |                     |                     |
	| ssh     | functional-193000                                                        | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| ssh     | functional-193000 ssh                                                    | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-193000 cache reload                                           | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
	| ssh     | functional-193000 ssh                                                    | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| kubectl | functional-193000 kubectl --                                             | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT |                     |
	|         | --context functional-193000                                              |                      |         |                |                     |                     |
	|         | get pods                                                                 |                      |         |                |                     |                     |
	| start   | -p functional-193000                                                     | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
	|         | --wait=all                                                               |                      |         |                |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 10:37:12
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 10:37:12.533208    7432 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:37:12.533350    7432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:37:12.533352    7432 out.go:304] Setting ErrFile to fd 2...
	I0408 10:37:12.533354    7432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:37:12.533492    7432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:37:12.534533    7432 out.go:298] Setting JSON to false
	I0408 10:37:12.550860    7432 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5802,"bootTime":1712592030,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:37:12.550935    7432 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:37:12.556824    7432 out.go:177] * [functional-193000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:37:12.564756    7432 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:37:12.564796    7432 notify.go:220] Checking for updates...
	I0408 10:37:12.572688    7432 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:37:12.575740    7432 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:37:12.578653    7432 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:37:12.581691    7432 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:37:12.584761    7432 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:37:12.588021    7432 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:37:12.588074    7432 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:37:12.592711    7432 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 10:37:12.599677    7432 start.go:297] selected driver: qemu2
	I0408 10:37:12.599683    7432 start.go:901] validating driver "qemu2" against &{Name:functional-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:37:12.599747    7432 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:37:12.602110    7432 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 10:37:12.602160    7432 cni.go:84] Creating CNI manager for ""
	I0408 10:37:12.602167    7432 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:37:12.602216    7432 start.go:340] cluster config:
	{Name:functional-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:37:12.606476    7432 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:37:12.613637    7432 out.go:177] * Starting "functional-193000" primary control-plane node in "functional-193000" cluster
	I0408 10:37:12.616713    7432 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:37:12.616726    7432 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:37:12.616736    7432 cache.go:56] Caching tarball of preloaded images
	I0408 10:37:12.616806    7432 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:37:12.616810    7432 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:37:12.616864    7432 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/functional-193000/config.json ...
	I0408 10:37:12.617303    7432 start.go:360] acquireMachinesLock for functional-193000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:37:12.617332    7432 start.go:364] duration metric: took 25.041µs to acquireMachinesLock for "functional-193000"
	I0408 10:37:12.617340    7432 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:37:12.617343    7432 fix.go:54] fixHost starting: 
	I0408 10:37:12.617449    7432 fix.go:112] recreateIfNeeded on functional-193000: state=Stopped err=<nil>
	W0408 10:37:12.617455    7432 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:37:12.624696    7432 out.go:177] * Restarting existing qemu2 VM for "functional-193000" ...
	I0408 10:37:12.627789    7432 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:3e:d4:5d:2a:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/disk.qcow2
	I0408 10:37:12.629826    7432 main.go:141] libmachine: STDOUT: 
	I0408 10:37:12.629843    7432 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:37:12.629873    7432 fix.go:56] duration metric: took 12.528541ms for fixHost
	I0408 10:37:12.629875    7432 start.go:83] releasing machines lock for "functional-193000", held for 12.5405ms
	W0408 10:37:12.629882    7432 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:37:12.629913    7432 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:37:12.629918    7432 start.go:728] Will try again in 5 seconds ...
	I0408 10:37:17.632176    7432 start.go:360] acquireMachinesLock for functional-193000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:37:17.632505    7432 start.go:364] duration metric: took 263.625µs to acquireMachinesLock for "functional-193000"
	I0408 10:37:17.632606    7432 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:37:17.632619    7432 fix.go:54] fixHost starting: 
	I0408 10:37:17.633277    7432 fix.go:112] recreateIfNeeded on functional-193000: state=Stopped err=<nil>
	W0408 10:37:17.633295    7432 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:37:17.642638    7432 out.go:177] * Restarting existing qemu2 VM for "functional-193000" ...
	I0408 10:37:17.646841    7432 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:3e:d4:5d:2a:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/disk.qcow2
	I0408 10:37:17.655923    7432 main.go:141] libmachine: STDOUT: 
	I0408 10:37:17.655974    7432 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:37:17.656032    7432 fix.go:56] duration metric: took 23.417625ms for fixHost
	I0408 10:37:17.656044    7432 start.go:83] releasing machines lock for "functional-193000", held for 23.527292ms
	W0408 10:37:17.656218    7432 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-193000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:37:17.664594    7432 out.go:177] 
	W0408 10:37:17.668713    7432 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:37:17.668741    7432 out.go:239] * 
	W0408 10:37:17.671217    7432 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:37:17.679588    7432 out.go:177] 
	
	
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-193000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-557000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:35 PDT |                     |
|         | -p download-only-557000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:35 PDT | 08 Apr 24 10:36 PDT |
| delete  | -p download-only-557000                                                  | download-only-557000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
| start   | -o=json --download-only                                                  | download-only-702000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | -p download-only-702000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
| delete  | -p download-only-702000                                                  | download-only-702000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
| start   | -o=json --download-only                                                  | download-only-347000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | -p download-only-347000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.30.0-rc.1                                        |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
| delete  | -p download-only-347000                                                  | download-only-347000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
| delete  | -p download-only-557000                                                  | download-only-557000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
| delete  | -p download-only-702000                                                  | download-only-702000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
| delete  | -p download-only-347000                                                  | download-only-347000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
| start   | --download-only -p                                                       | binary-mirror-035000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | binary-mirror-035000                                                     |                      |         |                |                     |                     |
|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
|         | --binary-mirror                                                          |                      |         |                |                     |                     |
|         | http://127.0.0.1:51060                                                   |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | -p binary-mirror-035000                                                  | binary-mirror-035000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
| addons  | enable dashboard -p                                                      | addons-610000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | addons-610000                                                            |                      |         |                |                     |                     |
| addons  | disable dashboard -p                                                     | addons-610000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | addons-610000                                                            |                      |         |                |                     |                     |
| start   | -p addons-610000 --wait=true                                             | addons-610000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
|         | --addons=registry                                                        |                      |         |                |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
| delete  | -p addons-610000                                                         | addons-610000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
| start   | -p nospam-898000 -n=1 --memory=2250 --wait=false                         | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| start   | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| pause   | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| unpause | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| stop    | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-898000 --log_dir                                                  | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| delete  | -p nospam-898000                                                         | nospam-898000        | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
| start   | -p functional-193000                                                     | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
|         | --memory=4000                                                            |                      |         |                |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
| start   | -p functional-193000                                                     | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
| cache   | functional-193000 cache add                                              | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | functional-193000 cache add                                              | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | functional-193000 cache add                                              | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-193000 cache add                                              | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
|         | minikube-local-cache-test:functional-193000                              |                      |         |                |                     |                     |
| cache   | functional-193000 cache delete                                           | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
|         | minikube-local-cache-test:functional-193000                              |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
| ssh     | functional-193000 ssh sudo                                               | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT |                     |
|         | crictl images                                                            |                      |         |                |                     |                     |
| ssh     | functional-193000                                                        | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| ssh     | functional-193000 ssh                                                    | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-193000 cache reload                                           | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
| ssh     | functional-193000 ssh                                                    | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT | 08 Apr 24 10:37 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| kubectl | functional-193000 kubectl --                                             | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT |                     |
|         | --context functional-193000                                              |                      |         |                |                     |                     |
|         | get pods                                                                 |                      |         |                |                     |                     |
| start   | -p functional-193000                                                     | functional-193000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:37 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
|         | --wait=all                                                               |                      |         |                |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/04/08 10:37:12
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0408 10:37:12.533208    7432 out.go:291] Setting OutFile to fd 1 ...
I0408 10:37:12.533350    7432 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:37:12.533352    7432 out.go:304] Setting ErrFile to fd 2...
I0408 10:37:12.533354    7432 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:37:12.533492    7432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
I0408 10:37:12.534533    7432 out.go:298] Setting JSON to false
I0408 10:37:12.550860    7432 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5802,"bootTime":1712592030,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0408 10:37:12.550935    7432 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0408 10:37:12.556824    7432 out.go:177] * [functional-193000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
I0408 10:37:12.564756    7432 out.go:177]   - MINIKUBE_LOCATION=18585
I0408 10:37:12.564796    7432 notify.go:220] Checking for updates...
I0408 10:37:12.572688    7432 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
I0408 10:37:12.575740    7432 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0408 10:37:12.578653    7432 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0408 10:37:12.581691    7432 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
I0408 10:37:12.584761    7432 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0408 10:37:12.588021    7432 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 10:37:12.588074    7432 driver.go:392] Setting default libvirt URI to qemu:///system
I0408 10:37:12.592711    7432 out.go:177] * Using the qemu2 driver based on existing profile
I0408 10:37:12.599677    7432 start.go:297] selected driver: qemu2
I0408 10:37:12.599683    7432 start.go:901] validating driver "qemu2" against &{Name:functional-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 10:37:12.599747    7432 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0408 10:37:12.602110    7432 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0408 10:37:12.602160    7432 cni.go:84] Creating CNI manager for ""
I0408 10:37:12.602167    7432 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0408 10:37:12.602216    7432 start.go:340] cluster config:
{Name:functional-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 10:37:12.606476    7432 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0408 10:37:12.613637    7432 out.go:177] * Starting "functional-193000" primary control-plane node in "functional-193000" cluster
I0408 10:37:12.616713    7432 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0408 10:37:12.616726    7432 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0408 10:37:12.616736    7432 cache.go:56] Caching tarball of preloaded images
I0408 10:37:12.616806    7432 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0408 10:37:12.616810    7432 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0408 10:37:12.616864    7432 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/functional-193000/config.json ...
I0408 10:37:12.617303    7432 start.go:360] acquireMachinesLock for functional-193000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0408 10:37:12.617332    7432 start.go:364] duration metric: took 25.041µs to acquireMachinesLock for "functional-193000"
I0408 10:37:12.617340    7432 start.go:96] Skipping create...Using existing machine configuration
I0408 10:37:12.617343    7432 fix.go:54] fixHost starting: 
I0408 10:37:12.617449    7432 fix.go:112] recreateIfNeeded on functional-193000: state=Stopped err=<nil>
W0408 10:37:12.617455    7432 fix.go:138] unexpected machine state, will restart: <nil>
I0408 10:37:12.624696    7432 out.go:177] * Restarting existing qemu2 VM for "functional-193000" ...
I0408 10:37:12.627789    7432 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:3e:d4:5d:2a:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/disk.qcow2
I0408 10:37:12.629826    7432 main.go:141] libmachine: STDOUT: 
I0408 10:37:12.629843    7432 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0408 10:37:12.629873    7432 fix.go:56] duration metric: took 12.528541ms for fixHost
I0408 10:37:12.629875    7432 start.go:83] releasing machines lock for "functional-193000", held for 12.5405ms
W0408 10:37:12.629882    7432 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0408 10:37:12.629913    7432 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0408 10:37:12.629918    7432 start.go:728] Will try again in 5 seconds ...
I0408 10:37:17.632176    7432 start.go:360] acquireMachinesLock for functional-193000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0408 10:37:17.632505    7432 start.go:364] duration metric: took 263.625µs to acquireMachinesLock for "functional-193000"
I0408 10:37:17.632606    7432 start.go:96] Skipping create...Using existing machine configuration
I0408 10:37:17.632619    7432 fix.go:54] fixHost starting: 
I0408 10:37:17.633277    7432 fix.go:112] recreateIfNeeded on functional-193000: state=Stopped err=<nil>
W0408 10:37:17.633295    7432 fix.go:138] unexpected machine state, will restart: <nil>
I0408 10:37:17.642638    7432 out.go:177] * Restarting existing qemu2 VM for "functional-193000" ...
I0408 10:37:17.646841    7432 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:3e:d4:5d:2a:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/disk.qcow2
I0408 10:37:17.655923    7432 main.go:141] libmachine: STDOUT: 
I0408 10:37:17.655974    7432 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0408 10:37:17.656032    7432 fix.go:56] duration metric: took 23.417625ms for fixHost
I0408 10:37:17.656044    7432 start.go:83] releasing machines lock for "functional-193000", held for 23.527292ms
W0408 10:37:17.656218    7432 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-193000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0408 10:37:17.664594    7432 out.go:177] 
W0408 10:37:17.668713    7432 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0408 10:37:17.668741    7432 out.go:239] * 
W0408 10:37:17.671217    7432 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0408 10:37:17.679588    7432 out.go:177] 
* The control-plane node functional-193000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-193000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
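
This failure (and the GUEST_PROVISION exit above) traces to the qemu2 driver being unable to reach the socket_vmnet daemon ('Failed to connect to "/var/run/socket_vmnet": Connection refused'), so the VM never starts and `minikube logs` has no guest to report on. A minimal recovery sketch for the agent, assuming socket_vmnet was installed via Homebrew (the use of `brew services` and the restart step are assumptions, not taken from this log):

    # Check that the daemon's socket actually exists on the agent
    ls -l /var/run/socket_vmnet
    # Restart the Homebrew-managed daemon (it needs root to use vmnet)
    sudo brew services restart socket_vmnet
    # Then recreate the profile, as the error message above suggests
    minikube delete -p functional-193000
    minikube start -p functional-193000 --driver=qemu2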
TestFunctional/serial/LogsFileCmd (0.08s)
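
This test writes `minikube logs` to a file and asserts the output contains the word "Linux" (presumably emitted only once a guest actually boots); with the VM stopped it captures only the host-side audit trail below, so the assertion fails. A hand reproduction of the check, as a sketch assuming the same binary and profile (the output path is illustrative):

    # Dump the cluster logs to a file, then search for the expected word
    out/minikube-darwin-arm64 -p functional-193000 logs --file /tmp/logs.txt
    grep -q Linux /tmp/logs.txt && echo "logs mention Linux" || echo "assertion would fail"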
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3825418224/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
(identical to the Audit table shown above)

==> Last Start <==
Log file created at: 2024/04/08 10:37:12
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0408 10:37:12.533208    7432 out.go:291] Setting OutFile to fd 1 ...
I0408 10:37:12.533350    7432 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:37:12.533352    7432 out.go:304] Setting ErrFile to fd 2...
I0408 10:37:12.533354    7432 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:37:12.533492    7432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
I0408 10:37:12.534533    7432 out.go:298] Setting JSON to false
I0408 10:37:12.550860    7432 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5802,"bootTime":1712592030,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0408 10:37:12.550935    7432 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0408 10:37:12.556824    7432 out.go:177] * [functional-193000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
I0408 10:37:12.564756    7432 out.go:177]   - MINIKUBE_LOCATION=18585
I0408 10:37:12.564796    7432 notify.go:220] Checking for updates...
I0408 10:37:12.572688    7432 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
I0408 10:37:12.575740    7432 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0408 10:37:12.578653    7432 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0408 10:37:12.581691    7432 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
I0408 10:37:12.584761    7432 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0408 10:37:12.588021    7432 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 10:37:12.588074    7432 driver.go:392] Setting default libvirt URI to qemu:///system
I0408 10:37:12.592711    7432 out.go:177] * Using the qemu2 driver based on existing profile
I0408 10:37:12.599677    7432 start.go:297] selected driver: qemu2
I0408 10:37:12.599683    7432 start.go:901] validating driver "qemu2" against &{Name:functional-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 10:37:12.599747    7432 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0408 10:37:12.602110    7432 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0408 10:37:12.602160    7432 cni.go:84] Creating CNI manager for ""
I0408 10:37:12.602167    7432 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0408 10:37:12.602216    7432 start.go:340] cluster config:
{Name:functional-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 10:37:12.606476    7432 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0408 10:37:12.613637    7432 out.go:177] * Starting "functional-193000" primary control-plane node in "functional-193000" cluster
I0408 10:37:12.616713    7432 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0408 10:37:12.616726    7432 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0408 10:37:12.616736    7432 cache.go:56] Caching tarball of preloaded images
I0408 10:37:12.616806    7432 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0408 10:37:12.616810    7432 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0408 10:37:12.616864    7432 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/functional-193000/config.json ...
I0408 10:37:12.617303    7432 start.go:360] acquireMachinesLock for functional-193000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0408 10:37:12.617332    7432 start.go:364] duration metric: took 25.041µs to acquireMachinesLock for "functional-193000"
I0408 10:37:12.617340    7432 start.go:96] Skipping create...Using existing machine configuration
I0408 10:37:12.617343    7432 fix.go:54] fixHost starting: 
I0408 10:37:12.617449    7432 fix.go:112] recreateIfNeeded on functional-193000: state=Stopped err=<nil>
W0408 10:37:12.617455    7432 fix.go:138] unexpected machine state, will restart: <nil>
I0408 10:37:12.624696    7432 out.go:177] * Restarting existing qemu2 VM for "functional-193000" ...
I0408 10:37:12.627789    7432 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:3e:d4:5d:2a:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/disk.qcow2
I0408 10:37:12.629826    7432 main.go:141] libmachine: STDOUT: 
I0408 10:37:12.629843    7432 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0408 10:37:12.629873    7432 fix.go:56] duration metric: took 12.528541ms for fixHost
I0408 10:37:12.629875    7432 start.go:83] releasing machines lock for "functional-193000", held for 12.5405ms
W0408 10:37:12.629882    7432 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0408 10:37:12.629913    7432 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0408 10:37:12.629918    7432 start.go:728] Will try again in 5 seconds ...
I0408 10:37:17.632176    7432 start.go:360] acquireMachinesLock for functional-193000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0408 10:37:17.632505    7432 start.go:364] duration metric: took 263.625µs to acquireMachinesLock for "functional-193000"
I0408 10:37:17.632606    7432 start.go:96] Skipping create...Using existing machine configuration
I0408 10:37:17.632619    7432 fix.go:54] fixHost starting: 
I0408 10:37:17.633277    7432 fix.go:112] recreateIfNeeded on functional-193000: state=Stopped err=<nil>
W0408 10:37:17.633295    7432 fix.go:138] unexpected machine state, will restart: <nil>
I0408 10:37:17.642638    7432 out.go:177] * Restarting existing qemu2 VM for "functional-193000" ...
I0408 10:37:17.646841    7432 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:3e:d4:5d:2a:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/functional-193000/disk.qcow2
I0408 10:37:17.655923    7432 main.go:141] libmachine: STDOUT: 
I0408 10:37:17.655974    7432 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0408 10:37:17.656032    7432 fix.go:56] duration metric: took 23.417625ms for fixHost
I0408 10:37:17.656044    7432 start.go:83] releasing machines lock for "functional-193000", held for 23.527292ms
W0408 10:37:17.656218    7432 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-193000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0408 10:37:17.664594    7432 out.go:177] 
W0408 10:37:17.668713    7432 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0408 10:37:17.668741    7432 out.go:239] * 
W0408 10:37:17.671217    7432 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0408 10:37:17.679588    7432 out.go:177] 
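Both restart attempts above fail identically: libmachine cannot reach the unix socket /var/run/socket_vmnet that the qemu2 driver uses for networking. As a minimal sketch (not part of the report; only the socket path is taken from the log), probing that socket from the CI host would reproduce the same error whenever no socket_vmnet daemon is listening:

// probe_socket_vmnet.go - illustrative sketch; assumes it runs on the CI host.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Path copied from the failing libmachine log line above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With no socket_vmnet daemon listening, this prints the same
		// "connection refused" that the driver reports.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}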

--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-193000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-193000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.232834ms)

** stderr ** 
	error: context "functional-193000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-193000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-193000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-193000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-193000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-193000 --alsologtostderr -v=1] stderr:
I0408 10:38:00.420606    7756 out.go:291] Setting OutFile to fd 1 ...
I0408 10:38:00.421007    7756 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:38:00.421010    7756 out.go:304] Setting ErrFile to fd 2...
I0408 10:38:00.421012    7756 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:38:00.421158    7756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
I0408 10:38:00.421377    7756 mustload.go:65] Loading cluster: functional-193000
I0408 10:38:00.421573    7756 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 10:38:00.425207    7756 out.go:177] * The control-plane node functional-193000 host is not running: state=Stopped
I0408 10:38:00.429023    7756 out.go:177]   To start a cluster, run: "minikube start -p functional-193000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (43.801917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 status: exit status 7 (32.223958ms)

-- stdout --
	functional-193000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-193000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (32.109542ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-193000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 status -o json: exit status 7 (31.911667ms)

-- stdout --
	{"Name":"functional-193000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-193000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (32.37975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)
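All three status invocations above fail only because the host is stopped; the -f/--format flag itself renders the status struct (whose JSON form is shown above) through a Go template. A minimal sketch of that rendering, assuming a struct shaped like the JSON output; the Status type here is illustrative, not minikube's internal one:

package main

import (
	"os"
	"text/template"
)

// Status mirrors the fields visible in the JSON output above.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	st := Status{Name: "functional-193000", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped"}
	// Same template string the test passes via -f (including its "kublet" typo);
	// executing it prints the line seen in the -- stdout -- block.
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	tmpl.Execute(os.Stdout, st)
}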

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-193000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-193000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.902042ms)

** stderr ** 
	error: context "functional-193000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-193000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-193000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-193000 describe po hello-node-connect: exit status 1 (26.251375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-193000

** /stderr **
functional_test.go:1600: "kubectl --context functional-193000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-193000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-193000 logs -l app=hello-node-connect: exit status 1 (26.595125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-193000

** /stderr **
functional_test.go:1606: "kubectl --context functional-193000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-193000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-193000 describe svc hello-node-connect: exit status 1 (26.472041ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-193000

** /stderr **
functional_test.go:1612: "kubectl --context functional-193000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (32.121084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-193000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (32.427625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "echo hello": exit status 83 (45.985916ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-193000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-193000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-193000\"\n"*. args "out/minikube-darwin-arm64 -p functional-193000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "cat /etc/hostname": exit status 83 (44.802292ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-193000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-193000"- but got *"* The control-plane node functional-193000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-193000\"\n"*. args "out/minikube-darwin-arm64 -p functional-193000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (35.454917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (57.474958ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-193000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh -n functional-193000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh -n functional-193000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.351083ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-193000 ssh -n functional-193000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-193000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-193000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 cp functional-193000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2185986734/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 cp functional-193000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2185986734/001/cp-test.txt: exit status 83 (43.029125ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-193000 cp functional-193000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2185986734/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh -n functional-193000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh -n functional-193000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.920333ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-193000 ssh -n functional-193000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2185986734/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-193000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-193000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (43.929458ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-193000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh -n functional-193000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh -n functional-193000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (46.534542ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-193000 ssh -n functional-193000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-193000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-193000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
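The "(-want +got)" blocks in this test are go-cmp diffs: "-" lines come from the expected string and "+" lines from what the command actually printed. A minimal sketch of how such a diff is produced, assuming the github.com/google/go-cmp module the helpers use; the strings are taken from the failure above:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-193000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-193000\"\n"
	// cmp.Diff returns "" for equal values; otherwise a human-readable
	// diff where "-" marks want and "+" marks got, as rendered above.
	if d := cmp.Diff(want, got); d != "" {
		fmt.Printf("content mismatch (-want +got):\n%s", d)
	}
}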

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7043/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "sudo cat /etc/test/nested/copy/7043/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "sudo cat /etc/test/nested/copy/7043/hosts": exit status 83 (51.580083ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-193000 ssh "sudo cat /etc/test/nested/copy/7043/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-193000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-193000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-193000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-193000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (32.908334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7043.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "sudo cat /etc/ssl/certs/7043.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "sudo cat /etc/ssl/certs/7043.pem": exit status 83 (44.015417ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/7043.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-193000 ssh \"sudo cat /etc/ssl/certs/7043.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7043.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-193000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-193000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7043.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "sudo cat /usr/share/ca-certificates/7043.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "sudo cat /usr/share/ca-certificates/7043.pem": exit status 83 (41.770125ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/7043.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-193000 ssh \"sudo cat /usr/share/ca-certificates/7043.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7043.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-193000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-193000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (41.215416ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-193000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-193000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-193000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/70432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "sudo cat /etc/ssl/certs/70432.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "sudo cat /etc/ssl/certs/70432.pem": exit status 83 (41.1965ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/70432.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-193000 ssh \"sudo cat /etc/ssl/certs/70432.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/70432.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-193000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-193000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/70432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "sudo cat /usr/share/ca-certificates/70432.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "sudo cat /usr/share/ca-certificates/70432.pem": exit status 83 (41.670125ms)

-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/70432.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-193000 ssh \"sudo cat /usr/share/ca-certificates/70432.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/70432.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-193000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-193000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (47.73625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-193000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-193000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-193000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (31.974416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)
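
Triage note: all three CertSync checks above fail for the same root cause — "minikube ssh" exits 83 because the guest never booted, so the diff compares the expected PEM against the "host is not running" advice text instead of file contents. A minimal manual re-check, assuming the functional-193000 profile can actually boot on this host and that minikube_test2.pem is the local copy of the test certificate, would be:

    minikube start -p functional-193000
    # Fingerprint of the local test certificate...
    openssl x509 -noout -fingerprint -sha256 -in minikube_test2.pem
    # ...should match the copy synced into the VM:
    minikube -p functional-193000 ssh "sudo cat /usr/share/ca-certificates/70432.pem" \
      | openssl x509 -noout -fingerprint -sha256

Matching fingerprints would confirm the cert sync itself; in this run the check never gets that far.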

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-193000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-193000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.656291ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-193000

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-193000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-193000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-193000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-193000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-193000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-193000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-193000 -n functional-193000: exit status 7 (32.676875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
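
Triage note: the NodeLabels failure is a missing kubeconfig context, not missing labels — the cluster never came up, so no "functional-193000" context was written. On a healthy cluster the labels the test greps for are directly visible; a sketch, assuming the profile is running:

    kubectl config get-contexts    # functional-193000 must be listed
    kubectl --context functional-193000 get nodes --show-labels
    # should include minikube.k8s.io/version, .../commit, .../name, .../primary, .../updated_at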

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "sudo systemctl is-active crio": exit status 83 (41.247166ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-193000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-193000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
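
Triage note: this test asserts that the runtime not selected for the cluster (crio, since the profile runs docker) is disabled in the guest. Exit 83 means minikube refused before even reaching SSH. With a booted guest the check reduces to:

    minikube -p functional-193000 ssh "sudo systemctl is-active crio"    # expect "inactive" (is-active exits non-zero then)
    minikube -p functional-193000 ssh "sudo systemctl is-active docker"  # expect "active"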

                                                
                                    
TestFunctional/parallel/Version/components (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 version -o=json --components: exit status 83 (44.965125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-193000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-193000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-193000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-193000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-193000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-193000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-193000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-193000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-193000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-193000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-193000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-193000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-193000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-193000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-193000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-193000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-193000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-193000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-193000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-193000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)
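
Triage note: with a running cluster, "version -o=json --components" reports per-component versions; here stdout carries only the not-running notice, so every expected key is missing. Sketch:

    minikube -p functional-193000 version -o=json --components
    # On a healthy node the JSON includes entries such as "buildctl", "containerd",
    # "crictl", "docker" and "minikubeVersion" (the keys the test greps for above).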

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-193000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-193000 image ls --format short --alsologtostderr:
I0408 10:38:00.839277    7771 out.go:291] Setting OutFile to fd 1 ...
I0408 10:38:00.839458    7771 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:38:00.839461    7771 out.go:304] Setting ErrFile to fd 2...
I0408 10:38:00.839463    7771 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:38:00.839572    7771 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
I0408 10:38:00.839990    7771 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 10:38:00.840048    7771 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)
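
Triage note: this failure repeats identically for the table, json and yaml variants below — with no VM there is no image store, so "image ls" returns an empty list in every format. On a running docker-runtime cluster at least the pause image is expected in every listing; a sketch:

    minikube -p functional-193000 image ls --format short
    minikube -p functional-193000 image ls --format table
    # Both listings should contain registry.k8s.io/pause on a healthy node.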

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-193000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-193000 image ls --format table --alsologtostderr:
I0408 10:38:00.953112    7777 out.go:291] Setting OutFile to fd 1 ...
I0408 10:38:00.953274    7777 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:38:00.953277    7777 out.go:304] Setting ErrFile to fd 2...
I0408 10:38:00.953279    7777 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:38:00.953405    7777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
I0408 10:38:00.953813    7777 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 10:38:00.953884    7777 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-193000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-193000 image ls --format json --alsologtostderr:
I0408 10:38:00.915566    7775 out.go:291] Setting OutFile to fd 1 ...
I0408 10:38:00.915723    7775 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:38:00.915726    7775 out.go:304] Setting ErrFile to fd 2...
I0408 10:38:00.915728    7775 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:38:00.915850    7775 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
I0408 10:38:00.916246    7775 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 10:38:00.916309    7775 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-193000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-193000 image ls --format yaml --alsologtostderr:
I0408 10:38:00.877535    7773 out.go:291] Setting OutFile to fd 1 ...
I0408 10:38:00.877693    7773 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:38:00.877696    7773 out.go:304] Setting ErrFile to fd 2...
I0408 10:38:00.877699    7773 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:38:00.877831    7773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
I0408 10:38:00.878235    7773 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 10:38:00.878297    7773 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh pgrep buildkitd: exit status 83 (44.740375ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image build -t localhost/my-image:functional-193000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-193000 image build -t localhost/my-image:functional-193000 testdata/build --alsologtostderr:
I0408 10:38:01.035241    7781 out.go:291] Setting OutFile to fd 1 ...
I0408 10:38:01.035654    7781 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:38:01.035657    7781 out.go:304] Setting ErrFile to fd 2...
I0408 10:38:01.035660    7781 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:38:01.035845    7781 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
I0408 10:38:01.036232    7781 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 10:38:01.036663    7781 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 10:38:01.036893    7781 build_images.go:133] succeeded building to: 
I0408 10:38:01.036896    7781 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image ls
functional_test.go:442: expected "localhost/my-image:functional-193000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)
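
Triage note: with zero reachable nodes, "image build" logs both "succeeded building to:" and "failed building to:" with empty node lists and completes without error, so the tag never materializes. A manual round trip, assuming testdata/build contains a valid Dockerfile, would be:

    minikube -p functional-193000 image build -t localhost/my-image:functional-193000 testdata/build
    minikube -p functional-193000 image ls | grep my-image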

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-193000 docker-env) && out/minikube-darwin-arm64 status -p functional-193000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-193000 docker-env) && out/minikube-darwin-arm64 status -p functional-193000": exit status 1 (45.955458ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
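
Triage note: docker-env emits shell exports that point a local docker CLI at the daemon inside the VM, so the eval has nothing usable while the host is stopped. Sketch, assuming a docker CLI on the PATH:

    eval "$(minikube -p functional-193000 docker-env)"
    docker ps    # should list the cluster's containers once the VM is up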

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 update-context --alsologtostderr -v=2: exit status 83 (43.989333ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 10:38:00.706458    7765 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:38:00.707454    7765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:38:00.707457    7765 out.go:304] Setting ErrFile to fd 2...
	I0408 10:38:00.707459    7765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:38:00.707578    7765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:38:00.707770    7765 mustload.go:65] Loading cluster: functional-193000
	I0408 10:38:00.707970    7765 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:38:00.711700    7765 out.go:177] * The control-plane node functional-193000 host is not running: state=Stopped
	I0408 10:38:00.715672    7765 out.go:177]   To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-193000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-193000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-193000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)
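
Triage note: update-context rewrites the profile's kubeconfig entry (e.g. the API server address) and is expected to print "No changes" here — or "context has been updated" in the two sibling subtests — rather than the not-running advice. Sketch, assuming a started profile:

    minikube -p functional-193000 update-context
    kubectl config get-contexts functional-193000    # verify the entry exists afterwards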

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 update-context --alsologtostderr -v=2: exit status 83 (43.648916ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 10:38:00.795321    7769 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:38:00.795478    7769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:38:00.795482    7769 out.go:304] Setting ErrFile to fd 2...
	I0408 10:38:00.795484    7769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:38:00.795615    7769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:38:00.795846    7769 mustload.go:65] Loading cluster: functional-193000
	I0408 10:38:00.796052    7769 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:38:00.799650    7769 out.go:177] * The control-plane node functional-193000 host is not running: state=Stopped
	I0408 10:38:00.803708    7769 out.go:177]   To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-193000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-193000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-193000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 update-context --alsologtostderr -v=2: exit status 83 (43.594875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 10:38:00.751447    7767 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:38:00.751628    7767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:38:00.751631    7767 out.go:304] Setting ErrFile to fd 2...
	I0408 10:38:00.751633    7767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:38:00.751743    7767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:38:00.751965    7767 mustload.go:65] Loading cluster: functional-193000
	I0408 10:38:00.752150    7767 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:38:00.755794    7767 out.go:177] * The control-plane node functional-193000 host is not running: state=Stopped
	I0408 10:38:00.759721    7767 out.go:177]   To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-193000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-193000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-193000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-193000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-193000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.248542ms)

                                                
                                                
** stderr ** 
	error: context "functional-193000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-193000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)
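
Triage note: the remaining ServiceCmd subtests all build on this hello-node deployment, so one missing context cascades into five failures. The intended flow, sketched with an assumed NodePort exposure (the port value is illustrative):

    kubectl --context functional-193000 create deployment hello-node \
      --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-193000 expose deployment hello-node --type=NodePort --port=8080
    minikube -p functional-193000 service hello-node --url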

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 service list: exit status 83 (46.667916ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-193000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-193000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-193000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 service list -o json: exit status 83 (43.890875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-193000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 service --namespace=default --https --url hello-node: exit status 83 (43.90625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-193000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 service hello-node --url --format={{.IP}}: exit status 83 (49.890625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-193000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-193000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-193000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 service hello-node --url: exit status 83 (43.811042ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-193000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-193000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-193000"
functional_test.go:1565: failed to parse "* The control-plane node functional-193000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-193000\"": parse "* The control-plane node functional-193000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-193000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
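
Triage note: the URL subtest pipes stdout straight into a URL parser, which is why the two-line advice text surfaces as "invalid control character in URL". A healthy run prints exactly one URL:

    minikube -p functional-193000 service hello-node --url
    # expected shape (values illustrative): http://192.168.105.4:31234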

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-193000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-193000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0408 10:37:19.576075    7551 out.go:291] Setting OutFile to fd 1 ...
I0408 10:37:19.576275    7551 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:37:19.576280    7551 out.go:304] Setting ErrFile to fd 2...
I0408 10:37:19.576283    7551 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:37:19.576435    7551 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
I0408 10:37:19.576699    7551 mustload.go:65] Loading cluster: functional-193000
I0408 10:37:19.576926    7551 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 10:37:19.579740    7551 out.go:177] * The control-plane node functional-193000 host is not running: state=Stopped
I0408 10:37:19.587665    7551 out.go:177]   To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
stdout: * The control-plane node functional-193000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-193000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-193000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7550: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-193000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-193000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-193000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-193000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-193000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-193000": client config: context "functional-193000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (119.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-193000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-193000 get svc nginx-svc: exit status 1 (68.999333ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-193000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-193000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (119.27s)
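
Triage note: on macOS the tunnel process is what makes the service's LoadBalancer IP reachable from the host; with no cluster the nginx-svc lookup has no context and the probe URL degenerates to "http:" with no host. The intended flow, assuming a running cluster with nginx-svc deployed:

    minikube -p functional-193000 tunnel &    # keep running; may ask for elevated privileges to add routes
    kubectl --context functional-193000 get svc nginx-svc
    curl "http://$(kubectl --context functional-193000 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"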

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image load --daemon gcr.io/google-containers/addon-resizer:functional-193000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-193000 image load --daemon gcr.io/google-containers/addon-resizer:functional-193000 --alsologtostderr: (1.321967791s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-193000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image load --daemon gcr.io/google-containers/addon-resizer:functional-193000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-193000 image load --daemon gcr.io/google-containers/addon-resizer:functional-193000 --alsologtostderr: (1.329046875s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-193000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.368064083s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-193000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image load --daemon gcr.io/google-containers/addon-resizer:functional-193000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-193000 image load --daemon gcr.io/google-containers/addon-resizer:functional-193000 --alsologtostderr: (1.178175375s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-193000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image save gcr.io/google-containers/addon-resizer:functional-193000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)
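
Triage note: ImageSaveToFile and the ImageLoadFromFile test below form a round trip — save never produced a tarball here, so the later test loads a nonexistent archive and the image predictably stays absent. Sketch, assuming the image exists in the cluster (the /tmp path is illustrative):

    minikube -p functional-193000 image save gcr.io/google-containers/addon-resizer:functional-193000 /tmp/addon-resizer-save.tar
    minikube -p functional-193000 image load /tmp/addon-resizer-save.tar
    minikube -p functional-193000 image ls | grep addon-resizer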

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-193000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.031007042s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
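
Triage note: resolver #8 in the scutil dump above scopes cluster.local queries to 10.96.0.10, the in-cluster DNS service, which is only routable while the tunnel is up; with no VM the query times out (dig exit 9, "no servers could be reached"). The check the test performs:

    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
    # a working tunnel returns an A record, i.e. "ANSWER: 1" in dig's status line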

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.10s)
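The HTTP probe fails for the same underlying reason: with 10.96.0.10 unreachable, the name never resolves and the Go HTTP client times out waiting for headers. Assuming the tunnel and resolver are restored, the equivalent hand-check is a curl with a client-side timeout (the trailing dot forces the FQDN through the cluster.local resolver):

    curl -m 10 -sS http://nginx-svc.default.svc.cluster.local./ | grep "Welcome to nginx!"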

TestMultiControlPlane/serial/StartCluster (10.2s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-135000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-135000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.1285955s)

-- stdout --
	* [ha-135000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-135000" primary control-plane node in "ha-135000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-135000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 10:40:21.652373    7826 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:40:21.652518    7826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:40:21.652521    7826 out.go:304] Setting ErrFile to fd 2...
	I0408 10:40:21.652524    7826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:40:21.652648    7826 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:40:21.653739    7826 out.go:298] Setting JSON to false
	I0408 10:40:21.670115    7826 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5991,"bootTime":1712592030,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:40:21.670175    7826 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:40:21.676281    7826 out.go:177] * [ha-135000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:40:21.685129    7826 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:40:21.685168    7826 notify.go:220] Checking for updates...
	I0408 10:40:21.689200    7826 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:40:21.692034    7826 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:40:21.695148    7826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:40:21.698181    7826 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:40:21.701098    7826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:40:21.704280    7826 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:40:21.708157    7826 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 10:40:21.715077    7826 start.go:297] selected driver: qemu2
	I0408 10:40:21.715083    7826 start.go:901] validating driver "qemu2" against <nil>
	I0408 10:40:21.715089    7826 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:40:21.717423    7826 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 10:40:21.720156    7826 out.go:177] * Automatically selected the socket_vmnet network
	I0408 10:40:21.723150    7826 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 10:40:21.723202    7826 cni.go:84] Creating CNI manager for ""
	I0408 10:40:21.723208    7826 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0408 10:40:21.723212    7826 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 10:40:21.723246    7826 start.go:340] cluster config:
	{Name:ha-135000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-135000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Soc
ketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:40:21.727848    7826 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:40:21.735134    7826 out.go:177] * Starting "ha-135000" primary control-plane node in "ha-135000" cluster
	I0408 10:40:21.739086    7826 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:40:21.739106    7826 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:40:21.739119    7826 cache.go:56] Caching tarball of preloaded images
	I0408 10:40:21.739183    7826 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:40:21.739190    7826 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:40:21.739413    7826 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/ha-135000/config.json ...
	I0408 10:40:21.739427    7826 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/ha-135000/config.json: {Name:mkb59c4926bc6508b6236a860f55fbeb4fa42fc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:40:21.739658    7826 start.go:360] acquireMachinesLock for ha-135000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:40:21.739691    7826 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "ha-135000"
	I0408 10:40:21.739703    7826 start.go:93] Provisioning new machine with config: &{Name:ha-135000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.29.3 ClusterName:ha-135000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:40:21.739732    7826 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:40:21.747057    7826 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 10:40:21.764285    7826 start.go:159] libmachine.API.Create for "ha-135000" (driver="qemu2")
	I0408 10:40:21.764309    7826 client.go:168] LocalClient.Create starting
	I0408 10:40:21.764363    7826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:40:21.764392    7826 main.go:141] libmachine: Decoding PEM data...
	I0408 10:40:21.764404    7826 main.go:141] libmachine: Parsing certificate...
	I0408 10:40:21.764453    7826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:40:21.764482    7826 main.go:141] libmachine: Decoding PEM data...
	I0408 10:40:21.764489    7826 main.go:141] libmachine: Parsing certificate...
	I0408 10:40:21.764891    7826 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:40:21.908503    7826 main.go:141] libmachine: Creating SSH key...
	I0408 10:40:22.204193    7826 main.go:141] libmachine: Creating Disk image...
	I0408 10:40:22.204202    7826 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:40:22.204499    7826 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2
	I0408 10:40:22.217600    7826 main.go:141] libmachine: STDOUT: 
	I0408 10:40:22.217626    7826 main.go:141] libmachine: STDERR: 
	I0408 10:40:22.217698    7826 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2 +20000M
	I0408 10:40:22.228357    7826 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:40:22.228376    7826 main.go:141] libmachine: STDERR: 
	I0408 10:40:22.228389    7826 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2
	I0408 10:40:22.228394    7826 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:40:22.228422    7826 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:79:4a:5d:4c:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2
	I0408 10:40:22.230115    7826 main.go:141] libmachine: STDOUT: 
	I0408 10:40:22.230131    7826 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:40:22.230153    7826 client.go:171] duration metric: took 465.834209ms to LocalClient.Create
	I0408 10:40:24.232364    7826 start.go:128] duration metric: took 2.492593959s to createHost
	I0408 10:40:24.232467    7826 start.go:83] releasing machines lock for "ha-135000", held for 2.492748709s
	W0408 10:40:24.232517    7826 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:40:24.251726    7826 out.go:177] * Deleting "ha-135000" in qemu2 ...
	W0408 10:40:24.281334    7826 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:40:24.281358    7826 start.go:728] Will try again in 5 seconds ...
	I0408 10:40:29.283653    7826 start.go:360] acquireMachinesLock for ha-135000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:40:29.284243    7826 start.go:364] duration metric: took 463.542µs to acquireMachinesLock for "ha-135000"
	I0408 10:40:29.284369    7826 start.go:93] Provisioning new machine with config: &{Name:ha-135000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.29.3 ClusterName:ha-135000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:40:29.284613    7826 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:40:29.295170    7826 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 10:40:29.337017    7826 start.go:159] libmachine.API.Create for "ha-135000" (driver="qemu2")
	I0408 10:40:29.337083    7826 client.go:168] LocalClient.Create starting
	I0408 10:40:29.337203    7826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:40:29.337262    7826 main.go:141] libmachine: Decoding PEM data...
	I0408 10:40:29.337279    7826 main.go:141] libmachine: Parsing certificate...
	I0408 10:40:29.337341    7826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:40:29.337386    7826 main.go:141] libmachine: Decoding PEM data...
	I0408 10:40:29.337401    7826 main.go:141] libmachine: Parsing certificate...
	I0408 10:40:29.337914    7826 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:40:29.489869    7826 main.go:141] libmachine: Creating SSH key...
	I0408 10:40:29.678417    7826 main.go:141] libmachine: Creating Disk image...
	I0408 10:40:29.678423    7826 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:40:29.678691    7826 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2
	I0408 10:40:29.691444    7826 main.go:141] libmachine: STDOUT: 
	I0408 10:40:29.691464    7826 main.go:141] libmachine: STDERR: 
	I0408 10:40:29.691522    7826 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2 +20000M
	I0408 10:40:29.702129    7826 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:40:29.702155    7826 main.go:141] libmachine: STDERR: 
	I0408 10:40:29.702166    7826 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2
	I0408 10:40:29.702170    7826 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:40:29.702207    7826 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:a4:fc:0c:b9:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2
	I0408 10:40:29.703907    7826 main.go:141] libmachine: STDOUT: 
	I0408 10:40:29.703925    7826 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:40:29.703948    7826 client.go:171] duration metric: took 366.857375ms to LocalClient.Create
	I0408 10:40:31.706206    7826 start.go:128] duration metric: took 2.421540459s to createHost
	I0408 10:40:31.706260    7826 start.go:83] releasing machines lock for "ha-135000", held for 2.421972916s
	W0408 10:40:31.706574    7826 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-135000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-135000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:40:31.721113    7826 out.go:177] 
	W0408 10:40:31.723334    7826 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:40:31.723359    7826 out.go:239] * 
	* 
	W0408 10:40:31.725929    7826 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:40:31.733952    7826 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-135000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (68.3825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.20s)
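Every TestMultiControlPlane failure below cascades from this one: the qemu2 driver dials /var/run/socket_vmnet before launching the VM, both creation attempts were refused, and no host was ever provisioned. A quick sanity check on the agent, sketched under the assumption that socket_vmnet was installed via Homebrew as the client path in the log suggests (the service commands are the documented Homebrew setup, not taken from this run):

    # Does the socket exist, and is a daemon holding it?
    ls -l /var/run/socket_vmnet

    # socket_vmnet needs root for vmnet access; restart the service
    # and retry the start (assumed Homebrew-managed install).
    sudo brew services restart socket_vmnet
    out/minikube-darwin-arm64 start -p ha-135000 --driver=qemu2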

TestMultiControlPlane/serial/DeployApp (110.12s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (61.945916ms)

** stderr ** 
	error: cluster "ha-135000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- rollout status deployment/busybox: exit status 1 (58.987958ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.450167ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.633125ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.120792ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.653ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.57375ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.484291ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.280333ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.368209ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.89825ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.107041ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.482541ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.157625ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- exec  -- nslookup kubernetes.io: exit status 1 (59.52125ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.608917ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.5705ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (32.125834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (110.12s)
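All of the kubectl errors above are one symptom: the profile config exists on disk (so the wrapper resolves the cluster name), but no API server was ever started behind it. Assuming stock kubectl on the agent, the kubeconfig side can be confirmed directly:

    # Either the context is missing or it points at no reachable server.
    kubectl config get-contexts ha-135000
    kubectl --context ha-135000 cluster-info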

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-135000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.511584ms)

** stderr ** 
	error: no server found for cluster "ha-135000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (32.353792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-135000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-135000 -v=7 --alsologtostderr: exit status 83 (43.444542ms)

-- stdout --
	* The control-plane node ha-135000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-135000"

-- /stdout --
** stderr ** 
	I0408 10:42:22.059551    7916 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:42:22.060150    7916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:22.060156    7916 out.go:304] Setting ErrFile to fd 2...
	I0408 10:42:22.060159    7916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:22.060362    7916 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:42:22.060642    7916 mustload.go:65] Loading cluster: ha-135000
	I0408 10:42:22.060966    7916 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:42:22.064780    7916 out.go:177] * The control-plane node ha-135000 host is not running: state=Stopped
	I0408 10:42:22.068747    7916 out.go:177]   To start a cluster, run: "minikube start -p ha-135000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-135000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (32.401667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-135000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-135000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.236083ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-135000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-135000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-135000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (32.10225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-135000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-135000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-135000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-135000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-135000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-135000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (32.402917ms)
                                                
-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)
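The profile JSON above is consistent with the failed start: a single configured node with Status "Stopped", where the test wants four nodes and a "HAppy" status. Assuming jq is available on the agent, the asserted fields can be pulled out of the same output directly:

    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | {Name, Status, nodes: (.Config.Nodes | length)}'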

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 status --output json -v=7 --alsologtostderr: exit status 7 (31.9745ms)

-- stdout --
	{"Name":"ha-135000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0408 10:42:22.299271    7929 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:42:22.299408    7929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:22.299411    7929 out.go:304] Setting ErrFile to fd 2...
	I0408 10:42:22.299413    7929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:22.299550    7929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:42:22.299669    7929 out.go:298] Setting JSON to true
	I0408 10:42:22.299680    7929 mustload.go:65] Loading cluster: ha-135000
	I0408 10:42:22.299751    7929 notify.go:220] Checking for updates...
	I0408 10:42:22.299900    7929 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:42:22.299906    7929 status.go:255] checking status of ha-135000 ...
	I0408 10:42:22.300130    7929 status.go:330] ha-135000 host status = "Stopped" (err=<nil>)
	I0408 10:42:22.300133    7929 status.go:343] host is not running, skipping remaining checks
	I0408 10:42:22.300136    7929 status.go:257] ha-135000 status: &{Name:ha-135000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-135000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (32.242333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
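The unmarshal error pins down the mismatch: with a single configured node, `status --output json` emits one JSON object, while the test decodes into a []cmd.Status slice and so expects an array of per-node statuses. Assuming jq, the shape is easy to inspect:

    # Prints "object" here, which is why decoding into []cmd.Status fails.
    out/minikube-darwin-arm64 -p ha-135000 status --output json | jq type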

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 node stop m02 -v=7 --alsologtostderr: exit status 85 (49.321667ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0408 10:42:22.364806    7933 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:42:22.365055    7933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:22.365058    7933 out.go:304] Setting ErrFile to fd 2...
	I0408 10:42:22.365060    7933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:22.365201    7933 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:42:22.365462    7933 mustload.go:65] Loading cluster: ha-135000
	I0408 10:42:22.365659    7933 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:42:22.370291    7933 out.go:177] 
	W0408 10:42:22.373318    7933 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0408 10:42:22.373323    7933 out.go:239] * 
	* 
	W0408 10:42:22.376403    7933 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:42:22.379186    7933 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-135000 node stop m02 -v=7 --alsologtostderr": exit status 85
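
Exit status 85 (GUEST_NODE_RETRIEVE) is consistent with the profile config dumps in this report, whose Nodes list holds only a single unnamed entry (the primary control-plane node), so a lookup for "m02" has nothing to find. A hedged sketch of that kind of lookup (findNode is a hypothetical stand-in, not minikube's actual code):

	package main

	import "fmt"

	// Node carries only the field the lookup needs; the config dump shows a
	// single entry whose Name is empty (the primary control-plane node).
	type Node struct{ Name string }

	// findNode mimics the retrieval that fails with GUEST_NODE_RETRIEVE above.
	func findNode(nodes []Node, name string) (Node, error) {
		for _, n := range nodes {
			if n.Name == name {
				return n, nil
			}
		}
		return Node{}, fmt.Errorf("retrieving node: Could not find node %s", name)
	}

	func main() {
		nodes := []Node{{Name: ""}} // one-node cluster, as in the config dump
		_, err := findNode(nodes, "m02")
		fmt.Println(err) // retrieving node: Could not find node m02
	}
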
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr: exit status 7 (32.488667ms)

-- stdout --
	ha-135000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:42:22.414038    7935 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:42:22.414202    7935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:22.414206    7935 out.go:304] Setting ErrFile to fd 2...
	I0408 10:42:22.414208    7935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:22.414355    7935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:42:22.414485    7935 out.go:298] Setting JSON to false
	I0408 10:42:22.414495    7935 mustload.go:65] Loading cluster: ha-135000
	I0408 10:42:22.414543    7935 notify.go:220] Checking for updates...
	I0408 10:42:22.414702    7935 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:42:22.414708    7935 status.go:255] checking status of ha-135000 ...
	I0408 10:42:22.414935    7935 status.go:330] ha-135000 host status = "Stopped" (err=<nil>)
	I0408 10:42:22.414939    7935 status.go:343] host is not running, skipping remaining checks
	I0408 10:42:22.414942    7935 status.go:257] ha-135000 status: &{Name:ha-135000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr": ha-135000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr": ha-135000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr": ha-135000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr": ha-135000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (32.209875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-135000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-135000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-135000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
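
The escaped blob above is ordinary `minikube profile list --output json` output. Decoding it and reading each profile's Status makes the Stopped-vs-Degraded comparison easy to verify by hand; a minimal sketch, with the payload trimmed to the two fields the assertion cares about:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors only the parts of the payload used here.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		// Trimmed-down version of the payload captured above.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-135000","Status":"Stopped"}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status) // ha-135000: Stopped, not Degraded
		}
	}
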
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (32.404583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

TestMultiControlPlane/serial/RestartSecondaryNode (57.23s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 node start m02 -v=7 --alsologtostderr: exit status 85 (44.301916ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0408 10:42:22.585621    7945 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:42:22.585844    7945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:22.585847    7945 out.go:304] Setting ErrFile to fd 2...
	I0408 10:42:22.585850    7945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:22.585992    7945 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:42:22.586222    7945 mustload.go:65] Loading cluster: ha-135000
	I0408 10:42:22.586420    7945 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:42:22.589852    7945 out.go:177] 
	W0408 10:42:22.591134    7945 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0408 10:42:22.591143    7945 out.go:239] * 
	* 
	W0408 10:42:22.592987    7945 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:42:22.595673    7945 out.go:177] 

** /stderr **
ha_test.go:422: I0408 10:42:22.585621    7945 out.go:291] Setting OutFile to fd 1 ...
I0408 10:42:22.585844    7945 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:42:22.585847    7945 out.go:304] Setting ErrFile to fd 2...
I0408 10:42:22.585850    7945 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:42:22.585992    7945 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
I0408 10:42:22.586222    7945 mustload.go:65] Loading cluster: ha-135000
I0408 10:42:22.586420    7945 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 10:42:22.589852    7945 out.go:177] 
W0408 10:42:22.591134    7945 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0408 10:42:22.591143    7945 out.go:239] * 
* 
W0408 10:42:22.592987    7945 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0408 10:42:22.595673    7945 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-135000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr: exit status 7 (32.588167ms)

-- stdout --
	ha-135000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:42:22.630663    7947 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:42:22.630808    7947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:22.630811    7947 out.go:304] Setting ErrFile to fd 2...
	I0408 10:42:22.630813    7947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:22.630935    7947 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:42:22.631065    7947 out.go:298] Setting JSON to false
	I0408 10:42:22.631076    7947 mustload.go:65] Loading cluster: ha-135000
	I0408 10:42:22.631140    7947 notify.go:220] Checking for updates...
	I0408 10:42:22.631277    7947 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:42:22.631283    7947 status.go:255] checking status of ha-135000 ...
	I0408 10:42:22.631484    7947 status.go:330] ha-135000 host status = "Stopped" (err=<nil>)
	I0408 10:42:22.631488    7947 status.go:343] host is not running, skipping remaining checks
	I0408 10:42:22.631490    7947 status.go:257] ha-135000 status: &{Name:ha-135000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr: exit status 7 (74.706166ms)

-- stdout --
	ha-135000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:42:23.228909    7949 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:42:23.229101    7949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:23.229105    7949 out.go:304] Setting ErrFile to fd 2...
	I0408 10:42:23.229108    7949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:23.229271    7949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:42:23.229431    7949 out.go:298] Setting JSON to false
	I0408 10:42:23.229446    7949 mustload.go:65] Loading cluster: ha-135000
	I0408 10:42:23.229491    7949 notify.go:220] Checking for updates...
	I0408 10:42:23.229678    7949 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:42:23.229690    7949 status.go:255] checking status of ha-135000 ...
	I0408 10:42:23.229968    7949 status.go:330] ha-135000 host status = "Stopped" (err=<nil>)
	I0408 10:42:23.229973    7949 status.go:343] host is not running, skipping remaining checks
	I0408 10:42:23.229976    7949 status.go:257] ha-135000 status: &{Name:ha-135000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr: exit status 7 (75.398ms)

-- stdout --
	ha-135000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:42:25.350578    7951 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:42:25.350804    7951 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:25.350808    7951 out.go:304] Setting ErrFile to fd 2...
	I0408 10:42:25.350812    7951 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:25.350972    7951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:42:25.351135    7951 out.go:298] Setting JSON to false
	I0408 10:42:25.351149    7951 mustload.go:65] Loading cluster: ha-135000
	I0408 10:42:25.351188    7951 notify.go:220] Checking for updates...
	I0408 10:42:25.351432    7951 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:42:25.351441    7951 status.go:255] checking status of ha-135000 ...
	I0408 10:42:25.351715    7951 status.go:330] ha-135000 host status = "Stopped" (err=<nil>)
	I0408 10:42:25.351720    7951 status.go:343] host is not running, skipping remaining checks
	I0408 10:42:25.351723    7951 status.go:257] ha-135000 status: &{Name:ha-135000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr: exit status 7 (73.839375ms)

-- stdout --
	ha-135000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:42:27.912229    7953 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:42:27.912426    7953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:27.912430    7953 out.go:304] Setting ErrFile to fd 2...
	I0408 10:42:27.912433    7953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:27.912623    7953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:42:27.912793    7953 out.go:298] Setting JSON to false
	I0408 10:42:27.912810    7953 mustload.go:65] Loading cluster: ha-135000
	I0408 10:42:27.912884    7953 notify.go:220] Checking for updates...
	I0408 10:42:27.913057    7953 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:42:27.913065    7953 status.go:255] checking status of ha-135000 ...
	I0408 10:42:27.913328    7953 status.go:330] ha-135000 host status = "Stopped" (err=<nil>)
	I0408 10:42:27.913333    7953 status.go:343] host is not running, skipping remaining checks
	I0408 10:42:27.913336    7953 status.go:257] ha-135000 status: &{Name:ha-135000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr: exit status 7 (76.77275ms)

-- stdout --
	ha-135000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:42:31.038198    7955 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:42:31.038384    7955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:31.038388    7955 out.go:304] Setting ErrFile to fd 2...
	I0408 10:42:31.038392    7955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:31.038552    7955 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:42:31.038715    7955 out.go:298] Setting JSON to false
	I0408 10:42:31.038733    7955 mustload.go:65] Loading cluster: ha-135000
	I0408 10:42:31.038763    7955 notify.go:220] Checking for updates...
	I0408 10:42:31.039017    7955 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:42:31.039025    7955 status.go:255] checking status of ha-135000 ...
	I0408 10:42:31.039270    7955 status.go:330] ha-135000 host status = "Stopped" (err=<nil>)
	I0408 10:42:31.039275    7955 status.go:343] host is not running, skipping remaining checks
	I0408 10:42:31.039278    7955 status.go:257] ha-135000 status: &{Name:ha-135000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr: exit status 7 (76.513458ms)

-- stdout --
	ha-135000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:42:34.147960    7959 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:42:34.148133    7959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:34.148137    7959 out.go:304] Setting ErrFile to fd 2...
	I0408 10:42:34.148140    7959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:34.148305    7959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:42:34.148469    7959 out.go:298] Setting JSON to false
	I0408 10:42:34.148484    7959 mustload.go:65] Loading cluster: ha-135000
	I0408 10:42:34.148528    7959 notify.go:220] Checking for updates...
	I0408 10:42:34.148750    7959 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:42:34.148757    7959 status.go:255] checking status of ha-135000 ...
	I0408 10:42:34.149016    7959 status.go:330] ha-135000 host status = "Stopped" (err=<nil>)
	I0408 10:42:34.149021    7959 status.go:343] host is not running, skipping remaining checks
	I0408 10:42:34.149023    7959 status.go:257] ha-135000 status: &{Name:ha-135000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr: exit status 7 (75.794458ms)

-- stdout --
	ha-135000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:42:43.645035    7961 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:42:43.645215    7961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:43.645219    7961 out.go:304] Setting ErrFile to fd 2...
	I0408 10:42:43.645222    7961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:43.645414    7961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:42:43.645580    7961 out.go:298] Setting JSON to false
	I0408 10:42:43.645601    7961 mustload.go:65] Loading cluster: ha-135000
	I0408 10:42:43.645624    7961 notify.go:220] Checking for updates...
	I0408 10:42:43.645847    7961 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:42:43.645855    7961 status.go:255] checking status of ha-135000 ...
	I0408 10:42:43.646174    7961 status.go:330] ha-135000 host status = "Stopped" (err=<nil>)
	I0408 10:42:43.646179    7961 status.go:343] host is not running, skipping remaining checks
	I0408 10:42:43.646182    7961 status.go:257] ha-135000 status: &{Name:ha-135000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr: exit status 7 (75.504417ms)

-- stdout --
	ha-135000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:42:49.668686    7968 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:42:49.668897    7968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:49.668902    7968 out.go:304] Setting ErrFile to fd 2...
	I0408 10:42:49.668905    7968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:42:49.669078    7968 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:42:49.669251    7968 out.go:298] Setting JSON to false
	I0408 10:42:49.669266    7968 mustload.go:65] Loading cluster: ha-135000
	I0408 10:42:49.669309    7968 notify.go:220] Checking for updates...
	I0408 10:42:49.669533    7968 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:42:49.669542    7968 status.go:255] checking status of ha-135000 ...
	I0408 10:42:49.669818    7968 status.go:330] ha-135000 host status = "Stopped" (err=<nil>)
	I0408 10:42:49.669823    7968 status.go:343] host is not running, skipping remaining checks
	I0408 10:42:49.669826    7968 status.go:257] ha-135000 status: &{Name:ha-135000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr: exit status 7 (76.085042ms)

-- stdout --
	ha-135000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:43:04.558092    7970 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:43:04.558244    7970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:04.558248    7970 out.go:304] Setting ErrFile to fd 2...
	I0408 10:43:04.558251    7970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:04.558393    7970 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:43:04.558546    7970 out.go:298] Setting JSON to false
	I0408 10:43:04.558569    7970 mustload.go:65] Loading cluster: ha-135000
	I0408 10:43:04.558612    7970 notify.go:220] Checking for updates...
	I0408 10:43:04.558795    7970 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:43:04.558802    7970 status.go:255] checking status of ha-135000 ...
	I0408 10:43:04.559059    7970 status.go:330] ha-135000 host status = "Stopped" (err=<nil>)
	I0408 10:43:04.559065    7970 status.go:343] host is not running, skipping remaining checks
	I0408 10:43:04.559068    7970 status.go:257] ha-135000 status: &{Name:ha-135000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr: exit status 7 (77.614333ms)

-- stdout --
	ha-135000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:43:19.749196    7972 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:43:19.749351    7972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:19.749358    7972 out.go:304] Setting ErrFile to fd 2...
	I0408 10:43:19.749360    7972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:19.749505    7972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:43:19.749670    7972 out.go:298] Setting JSON to false
	I0408 10:43:19.749686    7972 mustload.go:65] Loading cluster: ha-135000
	I0408 10:43:19.749707    7972 notify.go:220] Checking for updates...
	I0408 10:43:19.749909    7972 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:43:19.749915    7972 status.go:255] checking status of ha-135000 ...
	I0408 10:43:19.750153    7972 status.go:330] ha-135000 host status = "Stopped" (err=<nil>)
	I0408 10:43:19.750157    7972 status.go:343] host is not running, skipping remaining checks
	I0408 10:43:19.750160    7972 status.go:257] ha-135000 status: &{Name:ha-135000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr" : exit status 7
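
The timestamps across the repeated runs above (10:42:22 through 10:43:19, with widening gaps) indicate the test polls `status` with a growing delay before giving up at ha_test.go:432. A rough sketch of such a poll loop, assuming a simple doubling backoff (hypothetical helper; the test uses its own retry utilities):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// pollStatus re-runs `minikube status` with a doubling delay until it
	// exits zero or the deadline passes, roughly matching the cadence above.
	func pollStatus(profile string, deadline time.Duration) error {
		delay := 500 * time.Millisecond
		start := time.Now()
		for time.Since(start) < deadline {
			if err := exec.Command("minikube", "-p", profile, "status").Run(); err == nil {
				return nil // exit status 0: all components running
			}
			time.Sleep(delay)
			delay *= 2
		}
		return fmt.Errorf("profile %q not healthy after %v", profile, deadline)
	}

	func main() {
		if err := pollStatus("ha-135000", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
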
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (35.1605ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (57.23s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-135000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-135000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-135000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-135000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-135000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-135000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (32.0015ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.48s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-135000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-135000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-135000 -v=7 --alsologtostderr: (2.1121955s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-135000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-135000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.229317709s)

-- stdout --
	* [ha-135000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-135000" primary control-plane node in "ha-135000" cluster
	* Restarting existing qemu2 VM for "ha-135000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-135000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
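
The restart fails because nothing is accepting connections on /var/run/socket_vmnet, the unix socket the qemu2 driver's networking is routed through (see the socket_vmnet_client invocation in the stderr below). The same "Connection refused" can be reproduced outside minikube by dialing the socket directly; a minimal Go sketch:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// If the socket_vmnet daemon is not running, this fails the same
		// way the VM restart above did.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println(err) // dial unix /var/run/socket_vmnet: connect: connection refused
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
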
** stderr ** 
	I0408 10:43:22.111025    7996 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:43:22.111227    7996 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:22.111231    7996 out.go:304] Setting ErrFile to fd 2...
	I0408 10:43:22.111235    7996 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:22.111402    7996 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:43:22.112682    7996 out.go:298] Setting JSON to false
	I0408 10:43:22.131853    7996 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6172,"bootTime":1712592030,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:43:22.131905    7996 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:43:22.136667    7996 out.go:177] * [ha-135000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:43:22.142577    7996 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:43:22.142634    7996 notify.go:220] Checking for updates...
	I0408 10:43:22.146621    7996 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:43:22.149618    7996 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:43:22.152576    7996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:43:22.155689    7996 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:43:22.158632    7996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:43:22.160425    7996 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:43:22.160479    7996 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:43:22.164633    7996 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 10:43:22.171452    7996 start.go:297] selected driver: qemu2
	I0408 10:43:22.171459    7996 start.go:901] validating driver "qemu2" against &{Name:ha-135000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.29.3 ClusterName:ha-135000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:43:22.171521    7996 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:43:22.173806    7996 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 10:43:22.173852    7996 cni.go:84] Creating CNI manager for ""
	I0408 10:43:22.173857    7996 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 10:43:22.173917    7996 start.go:340] cluster config:
	{Name:ha-135000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-135000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:43:22.178267    7996 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:43:22.186613    7996 out.go:177] * Starting "ha-135000" primary control-plane node in "ha-135000" cluster
	I0408 10:43:22.190640    7996 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:43:22.190654    7996 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:43:22.190663    7996 cache.go:56] Caching tarball of preloaded images
	I0408 10:43:22.190721    7996 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:43:22.190726    7996 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:43:22.190788    7996 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/ha-135000/config.json ...
	I0408 10:43:22.191245    7996 start.go:360] acquireMachinesLock for ha-135000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:43:22.191278    7996 start.go:364] duration metric: took 26.583µs to acquireMachinesLock for "ha-135000"
	I0408 10:43:22.191288    7996 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:43:22.191292    7996 fix.go:54] fixHost starting: 
	I0408 10:43:22.191408    7996 fix.go:112] recreateIfNeeded on ha-135000: state=Stopped err=<nil>
	W0408 10:43:22.191416    7996 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:43:22.199555    7996 out.go:177] * Restarting existing qemu2 VM for "ha-135000" ...
	I0408 10:43:22.203632    7996 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:a4:fc:0c:b9:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2
	I0408 10:43:22.205930    7996 main.go:141] libmachine: STDOUT: 
	I0408 10:43:22.205950    7996 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:43:22.205981    7996 fix.go:56] duration metric: took 14.687375ms for fixHost
	I0408 10:43:22.205986    7996 start.go:83] releasing machines lock for "ha-135000", held for 14.702959ms
	W0408 10:43:22.205992    7996 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:43:22.206022    7996 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:43:22.206027    7996 start.go:728] Will try again in 5 seconds ...
	I0408 10:43:27.208187    7996 start.go:360] acquireMachinesLock for ha-135000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:43:27.208542    7996 start.go:364] duration metric: took 260.667µs to acquireMachinesLock for "ha-135000"
	I0408 10:43:27.208676    7996 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:43:27.208694    7996 fix.go:54] fixHost starting: 
	I0408 10:43:27.209340    7996 fix.go:112] recreateIfNeeded on ha-135000: state=Stopped err=<nil>
	W0408 10:43:27.209368    7996 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:43:27.217741    7996 out.go:177] * Restarting existing qemu2 VM for "ha-135000" ...
	I0408 10:43:27.221874    7996 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:a4:fc:0c:b9:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2
	I0408 10:43:27.231203    7996 main.go:141] libmachine: STDOUT: 
	I0408 10:43:27.231281    7996 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:43:27.231368    7996 fix.go:56] duration metric: took 22.6695ms for fixHost
	I0408 10:43:27.231388    7996 start.go:83] releasing machines lock for "ha-135000", held for 22.825209ms
	W0408 10:43:27.231574    7996 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-135000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-135000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:43:27.239678    7996 out.go:177] 
	W0408 10:43:27.243676    7996 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:43:27.243700    7996 out.go:239] * 
	* 
	W0408 10:43:27.246395    7996 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:43:27.254646    7996 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-135000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-135000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (34.746375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.48s)
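
Note: every start attempt in this block dies at the same point: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet, so the qemu2 VM never boots and the subtests that follow fail as a cascade. A minimal Go probe, sketched here under the assumption that the socket path from the log is correct (this helper is hypothetical and not part of minikube), separates "socket file missing" from "daemon not accepting connections" on the affected agent:

	// socketprobe.go - hypothetical diagnostic, not part of minikube.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing log lines
		fi, err := os.Stat(sock)
		if err != nil {
			fmt.Println("socket missing:", err) // daemon never started, or wrong path
			return
		}
		if fi.Mode()&os.ModeSocket == 0 {
			fmt.Println(sock, "exists but is not a unix socket")
			return
		}
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the report: the file exists,
			// but no socket_vmnet daemon is listening behind it.
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}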

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 node delete m03 -v=7 --alsologtostderr: exit status 83 (42.724333ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-135000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-135000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 10:43:27.405758    8008 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:43:27.406172    8008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:27.406184    8008 out.go:304] Setting ErrFile to fd 2...
	I0408 10:43:27.406187    8008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:27.406343    8008 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:43:27.406544    8008 mustload.go:65] Loading cluster: ha-135000
	I0408 10:43:27.406747    8008 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:43:27.410261    8008 out.go:177] * The control-plane node ha-135000 host is not running: state=Stopped
	I0408 10:43:27.414047    8008 out.go:177]   To start a cluster, run: "minikube start -p ha-135000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-135000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr: exit status 7 (32.825ms)

                                                
                                                
-- stdout --
	ha-135000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 10:43:27.449094    8010 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:43:27.449241    8010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:27.449244    8010 out.go:304] Setting ErrFile to fd 2...
	I0408 10:43:27.449247    8010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:27.449372    8010 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:43:27.449487    8010 out.go:298] Setting JSON to false
	I0408 10:43:27.449499    8010 mustload.go:65] Loading cluster: ha-135000
	I0408 10:43:27.449549    8010 notify.go:220] Checking for updates...
	I0408 10:43:27.449701    8010 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:43:27.449709    8010 status.go:255] checking status of ha-135000 ...
	I0408 10:43:27.449909    8010 status.go:330] ha-135000 host status = "Stopped" (err=<nil>)
	I0408 10:43:27.449912    8010 status.go:343] host is not running, skipping remaining checks
	I0408 10:43:27.449915    8010 status.go:257] ha-135000 status: &{Name:ha-135000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (32.561458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)
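
Note: three distinct exit codes appear in this block and are easy to conflate: exit status 83 is minikube declining to act because the control-plane host is Stopped (the "To start a cluster, run ..." path), exit status 80 is GUEST_PROVISION after an actual start attempt fails, and exit status 7 from "status" appears to just report that host, kubelet, and apiserver are all stopped (helpers_test treats it as "may be ok"). A sketch of reading the status programmatically, assuming `minikube status -o json` emits the same Host/Kubelet/APIServer/Kubeconfig fields shown in the plain-text output above (single-node profile assumed; multi-node output would be a JSON array):

	// statuscheck.go - illustrative only; field names assumed from this report.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type clusterStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		// A stopped cluster makes this exit non-zero (7 in this report), but
		// stdout still carries the JSON, so the error is deliberately ignored.
		out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "ha-135000",
			"status", "-o", "json").Output()
		var st clusterStatus
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Println("unparseable status:", err)
			return
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}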

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-135000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-135000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-135000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (31.634708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)
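
Note: this assertion does not fail on its own merits: with the profile's only node stopped, `profile list` computes "Stopped" rather than "Degraded", so the mismatch is another downstream effect of the failed starts. The quoted payload above fixes the JSON shape, which makes it straightforward to pull out just the fields the test compares; a minimal decoder (hypothetical helper, shape taken from the log):

	// profilestatus.go - decodes `minikube profile list --output json`,
	// using the payload structure quoted in ha_test.go:413 above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type profileList struct {
		Valid []struct {
			Name   string
			Status string
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println("bad json:", err)
			return
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status) // expect "ha-135000: Stopped" in this run
		}
	}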

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-135000 stop -v=7 --alsologtostderr: (3.343366792s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr: exit status 7 (67.128833ms)

                                                
                                                
-- stdout --
	ha-135000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 10:43:30.998816    8038 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:43:30.998987    8038 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:30.998992    8038 out.go:304] Setting ErrFile to fd 2...
	I0408 10:43:30.998994    8038 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:30.999162    8038 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:43:30.999315    8038 out.go:298] Setting JSON to false
	I0408 10:43:30.999329    8038 mustload.go:65] Loading cluster: ha-135000
	I0408 10:43:30.999363    8038 notify.go:220] Checking for updates...
	I0408 10:43:30.999574    8038 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:43:30.999581    8038 status.go:255] checking status of ha-135000 ...
	I0408 10:43:30.999829    8038 status.go:330] ha-135000 host status = "Stopped" (err=<nil>)
	I0408 10:43:30.999833    8038 status.go:343] host is not running, skipping remaining checks
	I0408 10:43:30.999836    8038 status.go:257] ha-135000 status: &{Name:ha-135000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr": ha-135000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr": ha-135000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-135000 status -v=7 --alsologtostderr": ha-135000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (34.50875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.45s)
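
Note: the stop itself succeeded (exit 0 in 3.34s); what fails are the follow-up assertions, which expect a three-node HA cluster while this profile never grew past its single control-plane node. The checks in ha_test.go effectively count stanzas in the status text; a rough stand-in for that counting (hypothetical helper, not the test code):

	// countnodes.go - counts node stanzas in `minikube status` output.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Exit status 7 is expected for a fully stopped cluster; stdout is
		// still populated, so the error is ignored for this rough check.
		out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "ha-135000",
			"status").Output()
		s := string(out)
		fmt.Println("control planes:", strings.Count(s, "type: Control Plane"))
		fmt.Println("stopped kubelets:", strings.Count(s, "kubelet: Stopped"))
	}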

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-135000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-135000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.183452541s)

                                                
                                                
-- stdout --
	* [ha-135000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-135000" primary control-plane node in "ha-135000" cluster
	* Restarting existing qemu2 VM for "ha-135000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-135000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 10:43:31.065970    8042 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:43:31.066110    8042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:31.066114    8042 out.go:304] Setting ErrFile to fd 2...
	I0408 10:43:31.066116    8042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:31.066245    8042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:43:31.067288    8042 out.go:298] Setting JSON to false
	I0408 10:43:31.083220    8042 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6181,"bootTime":1712592030,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:43:31.083286    8042 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:43:31.088940    8042 out.go:177] * [ha-135000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:43:31.095860    8042 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:43:31.099899    8042 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:43:31.095916    8042 notify.go:220] Checking for updates...
	I0408 10:43:31.102881    8042 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:43:31.105837    8042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:43:31.108834    8042 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:43:31.111770    8042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:43:31.115105    8042 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:43:31.115353    8042 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:43:31.119838    8042 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 10:43:31.126848    8042 start.go:297] selected driver: qemu2
	I0408 10:43:31.126855    8042 start.go:901] validating driver "qemu2" against &{Name:ha-135000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-135000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:43:31.126910    8042 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:43:31.129328    8042 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 10:43:31.129378    8042 cni.go:84] Creating CNI manager for ""
	I0408 10:43:31.129384    8042 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 10:43:31.129430    8042 start.go:340] cluster config:
	{Name:ha-135000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-135000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:43:31.133722    8042 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:43:31.140917    8042 out.go:177] * Starting "ha-135000" primary control-plane node in "ha-135000" cluster
	I0408 10:43:31.146038    8042 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:43:31.146052    8042 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:43:31.146058    8042 cache.go:56] Caching tarball of preloaded images
	I0408 10:43:31.146104    8042 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:43:31.146109    8042 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:43:31.146157    8042 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/ha-135000/config.json ...
	I0408 10:43:31.146620    8042 start.go:360] acquireMachinesLock for ha-135000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:43:31.146649    8042 start.go:364] duration metric: took 22.709µs to acquireMachinesLock for "ha-135000"
	I0408 10:43:31.146658    8042 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:43:31.146663    8042 fix.go:54] fixHost starting: 
	I0408 10:43:31.146771    8042 fix.go:112] recreateIfNeeded on ha-135000: state=Stopped err=<nil>
	W0408 10:43:31.146782    8042 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:43:31.154807    8042 out.go:177] * Restarting existing qemu2 VM for "ha-135000" ...
	I0408 10:43:31.158892    8042 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:a4:fc:0c:b9:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2
	I0408 10:43:31.160905    8042 main.go:141] libmachine: STDOUT: 
	I0408 10:43:31.160922    8042 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:43:31.160948    8042 fix.go:56] duration metric: took 14.283375ms for fixHost
	I0408 10:43:31.160953    8042 start.go:83] releasing machines lock for "ha-135000", held for 14.299416ms
	W0408 10:43:31.160958    8042 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:43:31.160986    8042 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:43:31.160990    8042 start.go:728] Will try again in 5 seconds ...
	I0408 10:43:36.163238    8042 start.go:360] acquireMachinesLock for ha-135000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:43:36.163704    8042 start.go:364] duration metric: took 380.709µs to acquireMachinesLock for "ha-135000"
	I0408 10:43:36.163842    8042 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:43:36.163862    8042 fix.go:54] fixHost starting: 
	I0408 10:43:36.164577    8042 fix.go:112] recreateIfNeeded on ha-135000: state=Stopped err=<nil>
	W0408 10:43:36.164604    8042 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:43:36.169922    8042 out.go:177] * Restarting existing qemu2 VM for "ha-135000" ...
	I0408 10:43:36.173928    8042 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:a4:fc:0c:b9:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/ha-135000/disk.qcow2
	I0408 10:43:36.183222    8042 main.go:141] libmachine: STDOUT: 
	I0408 10:43:36.183345    8042 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:43:36.183421    8042 fix.go:56] duration metric: took 19.557542ms for fixHost
	I0408 10:43:36.183442    8042 start.go:83] releasing machines lock for "ha-135000", held for 19.714291ms
	W0408 10:43:36.183609    8042 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-135000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-135000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:43:36.190882    8042 out.go:177] 
	W0408 10:43:36.194907    8042 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:43:36.194935    8042 out.go:239] * 
	* 
	W0408 10:43:36.197331    8042 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:43:36.204843    8042 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-135000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (69.152125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
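
Note: the start path retries exactly once: fixHost fails within ~15ms, start.go:728 waits five seconds, the retry fails the same way, and minikube exits with GUEST_PROVISION; that bounded retry is why every start-based subtest in this report fails in roughly five seconds. The shape of that logic, as an illustration only (this is not minikube's actual implementation):

	// retrysketch.go - illustration of the single-retry pattern visible at
	// start.go:713/728 in the log; not minikube source code.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the qemu2 driver start that fails in this run.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := startHost()
		if err == nil {
			return
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}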

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-135000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-135000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-135000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (32.096083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.12s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-135000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-135000 --control-plane -v=7 --alsologtostderr: exit status 83 (45.599958ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-135000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-135000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 10:43:36.443120    8058 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:43:36.443283    8058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:36.443287    8058 out.go:304] Setting ErrFile to fd 2...
	I0408 10:43:36.443289    8058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:43:36.443425    8058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:43:36.443659    8058 mustload.go:65] Loading cluster: ha-135000
	I0408 10:43:36.443842    8058 config.go:182] Loaded profile config "ha-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:43:36.448099    8058 out.go:177] * The control-plane node ha-135000 host is not running: state=Stopped
	I0408 10:43:36.451869    8058 out.go:177]   To start a cluster, run: "minikube start -p ha-135000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-135000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (32.139625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-135000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-135000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-135000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-135000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-135000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-135000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-135000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-135000 -n ha-135000: exit status 7 (31.905709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)
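
For context on the assertion above: the check at ha_test.go:307 decodes the `profile list --output json` payload and compares the profile's `Status` field against "HAppy". A minimal sketch of that kind of check, assuming only the top-level `valid`/`invalid` arrays and the `Name`/`Status` fields visible in the quoted payload (the helper name and error handling are illustrative, not minikube's actual test code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors just the fields of `profile list --output json`
// needed for the status check; the real schema is much larger, as the
// failure message above shows.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

// profileStatus runs `minikube profile list --output json` and returns
// the Status string ("HAppy", "Stopped", ...) recorded for the profile.
func profileStatus(minikubeBin, profile string) (string, error) {
	out, err := exec.Command(minikubeBin, "profile", "list", "--output", "json").Output()
	if err != nil {
		return "", err
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		return "", err
	}
	for _, p := range pl.Valid {
		if p.Name == profile {
			return p.Status, nil
		}
	}
	return "", fmt.Errorf("profile %q not found", profile)
}

func main() {
	status, err := profileStatus("out/minikube-darwin-arm64", "ha-135000")
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	// The test above expected "HAppy" here but observed "Stopped".
	fmt.Println("status:", status)
}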

TestImageBuild/serial/Setup (9.88s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-272000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-272000 --driver=qemu2 : exit status 80 (9.805877333s)

-- stdout --
	* [image-272000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-272000" primary control-plane node in "image-272000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-272000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-272000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-272000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-272000 -n image-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-272000 -n image-272000: exit status 7 (69.057375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-272000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.88s)
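
Every start failure in this report reduces to the same precondition: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client receives ECONNREFUSED before QEMU is even launched. A quick way to reproduce that precondition in isolation is to dial the unix socket directly; this is a diagnostic sketch, not part of the test suite:

package main

import (
	"fmt"
	"net"
)

func main() {
	// The qemu2 driver starts QEMU through socket_vmnet_client, which
	// connects to this unix socket. If the socket_vmnet daemon is not
	// running, the dial fails with "connect: connection refused",
	// matching the ERROR lines in the logs above.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}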

TestJSONOutput/start/Command (9.81s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-860000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-860000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.813302833s)

-- stdout --
	{"specversion":"1.0","id":"addfcbf7-01d7-43d4-a6d2-9144287e6527","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-860000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dec89afb-d340-4b90-b223-b85f45740c21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18585"}}
	{"specversion":"1.0","id":"ea3171a1-95c5-4aee-a058-641e8fa64fe9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig"}}
	{"specversion":"1.0","id":"e35158da-783b-4a58-a902-08518af2ff23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"15258955-e123-49f7-b182-502f76a37c4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"726ea52b-c943-4cfb-82b0-2dec33f7a0ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube"}}
	{"specversion":"1.0","id":"c9ee5883-a2e5-4451-a579-a628c69f0103","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d21cf175-fb7d-404a-a2aa-b0681c359f33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bac525a3-8473-4796-885b-7ef91149cbe1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"8372ca51-bfa4-46da-9b02-bf65221e9eb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-860000\" primary control-plane node in \"json-output-860000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e603ea5-1fa7-4efb-a3e0-2bddb8f98ebe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"a2e1e4bc-15d3-4222-adf9-1521ddc8752e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-860000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"19bc2969-86e4-4fdb-b897-50be83a83222","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"df91c44d-52e3-4378-8c5e-9027ffdc9137","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"59377797-d447-4ae9-91d3-14fd47cedb88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-860000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a74262fa-0885-4886-baad-35d5e6083d8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"84bad320-ab82-428c-b1d9-702adff8eff3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-860000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.81s)
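
The secondary error at json_output_test.go:70 is mechanical: the test decodes stdout line by line as CloudEvents, and the raw "OUTPUT: " line that the driver leaked into the stream (like the "*"-prefixed line in the unpause failure below) is not JSON, so encoding/json rejects its first byte. A minimal sketch of that decode step under those assumptions:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Two lines as they appear in the stdout above: a CloudEvent
	// followed by raw driver output that leaked into the JSON stream.
	stdout := "{\"specversion\":\"1.0\",\"type\":\"io.k8s.sigs.minikube.step\",\"data\":{\"currentstep\":\"9\"}}\nOUTPUT: "

	sc := bufio.NewScanner(strings.NewReader(stdout))
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal([]byte(sc.Text()), &ev); err != nil {
			// Prints: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			continue
		}
		fmt.Println("event type:", ev["type"])
	}
}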

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-860000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-860000 --output=json --user=testUser: exit status 83 (80.057292ms)

-- stdout --
	{"specversion":"1.0","id":"80a1ff6e-2cce-42a9-94c2-4410458dede7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-860000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"321944a7-9dfd-484b-9d27-afa8724a384b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-860000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-860000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-860000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-860000 --output=json --user=testUser: exit status 83 (48.371375ms)

-- stdout --
	* The control-plane node json-output-860000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-860000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-860000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-860000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.49s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-748000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-748000 --driver=qemu2 : exit status 80 (10.036025042s)

-- stdout --
	* [first-748000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-748000" primary control-plane node in "first-748000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-748000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-748000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-04-08 10:44:09.236534 -0700 PDT m=+518.121486251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-749000 -n second-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-749000 -n second-749000: exit status 85 (82.787625ms)

-- stdout --
	* Profile "second-749000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-749000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-749000" host is not running, skipping log retrieval (state="* Profile \"second-749000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-749000\"")
helpers_test.go:175: Cleaning up "second-749000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-749000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-04-08 10:44:09.555553 -0700 PDT m=+518.440503418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-748000 -n first-748000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-748000 -n first-748000: exit status 7 (32.656542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-748000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-748000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-748000
--- FAIL: TestMinikubeProfile (10.49s)
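
A note on the recurring "status error: exit status 7 (may be ok)" lines: minikube status encodes host state in its exit code (7 for a stopped host, 85 when the profile does not exist, as both appear above), so the post-mortem helper treats a non-zero exit as data rather than a hard failure. A hedged sketch of that pattern; the helper below is illustrative, not the actual helpers_test.go code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostState runs `minikube status --format={{.Host}}` for a profile and
// returns the printed state plus the exit code, tolerating non-zero
// exits the way the post-mortem helpers above do.
func hostState(minikubeBin, profile string) (state string, code int, err error) {
	cmd := exec.Command(minikubeBin, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Exit status 7 means the host is stopped and 85 that the
		// profile is missing; both still print a state on stdout.
		return string(out), ee.ExitCode(), nil
	}
	if err != nil {
		return "", 0, err // the binary could not be run at all
	}
	return string(out), 0, nil
}

func main() {
	state, code, err := hostState("out/minikube-darwin-arm64", "first-748000")
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Printf("state=%q exit=%d (non-zero may be ok)\n", state, code)
}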

TestMountStart/serial/StartWithMountFirst (9.98s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-565000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-565000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.909975959s)

-- stdout --
	* [mount-start-1-565000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-565000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-565000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-565000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-565000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-565000 -n mount-start-1-565000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-565000 -n mount-start-1-565000: exit status 7 (70.762541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-565000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.98s)

TestMultiNode/serial/FreshStart2Nodes (9.96s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-529000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-529000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.891667084s)

-- stdout --
	* [multinode-529000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-529000" primary control-plane node in "multinode-529000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-529000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 10:44:20.038403    8224 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:44:20.038522    8224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:44:20.038525    8224 out.go:304] Setting ErrFile to fd 2...
	I0408 10:44:20.038528    8224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:44:20.038668    8224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:44:20.039674    8224 out.go:298] Setting JSON to false
	I0408 10:44:20.055689    8224 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6230,"bootTime":1712592030,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:44:20.055747    8224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:44:20.062552    8224 out.go:177] * [multinode-529000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:44:20.070537    8224 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:44:20.070572    8224 notify.go:220] Checking for updates...
	I0408 10:44:20.078449    8224 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:44:20.081477    8224 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:44:20.084515    8224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:44:20.087482    8224 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:44:20.090469    8224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:44:20.093650    8224 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:44:20.097394    8224 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 10:44:20.104476    8224 start.go:297] selected driver: qemu2
	I0408 10:44:20.104483    8224 start.go:901] validating driver "qemu2" against <nil>
	I0408 10:44:20.104496    8224 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:44:20.106951    8224 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 10:44:20.110437    8224 out.go:177] * Automatically selected the socket_vmnet network
	I0408 10:44:20.113579    8224 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 10:44:20.113626    8224 cni.go:84] Creating CNI manager for ""
	I0408 10:44:20.113633    8224 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0408 10:44:20.113639    8224 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 10:44:20.113676    8224 start.go:340] cluster config:
	{Name:multinode-529000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:44:20.118068    8224 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:44:20.125438    8224 out.go:177] * Starting "multinode-529000" primary control-plane node in "multinode-529000" cluster
	I0408 10:44:20.129488    8224 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:44:20.129504    8224 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:44:20.129513    8224 cache.go:56] Caching tarball of preloaded images
	I0408 10:44:20.129572    8224 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:44:20.129579    8224 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:44:20.129785    8224 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/multinode-529000/config.json ...
	I0408 10:44:20.129800    8224 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/multinode-529000/config.json: {Name:mk67b356d4b71ae88497a9d2bce0a7bab9f59ca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:44:20.130021    8224 start.go:360] acquireMachinesLock for multinode-529000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:44:20.130052    8224 start.go:364] duration metric: took 25.292µs to acquireMachinesLock for "multinode-529000"
	I0408 10:44:20.130064    8224 start.go:93] Provisioning new machine with config: &{Name:multinode-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:44:20.130107    8224 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:44:20.138520    8224 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 10:44:20.155883    8224 start.go:159] libmachine.API.Create for "multinode-529000" (driver="qemu2")
	I0408 10:44:20.155916    8224 client.go:168] LocalClient.Create starting
	I0408 10:44:20.155974    8224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:44:20.156003    8224 main.go:141] libmachine: Decoding PEM data...
	I0408 10:44:20.156013    8224 main.go:141] libmachine: Parsing certificate...
	I0408 10:44:20.156047    8224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:44:20.156069    8224 main.go:141] libmachine: Decoding PEM data...
	I0408 10:44:20.156076    8224 main.go:141] libmachine: Parsing certificate...
	I0408 10:44:20.156459    8224 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:44:20.297132    8224 main.go:141] libmachine: Creating SSH key...
	I0408 10:44:20.393205    8224 main.go:141] libmachine: Creating Disk image...
	I0408 10:44:20.393210    8224 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:44:20.393439    8224 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2
	I0408 10:44:20.405781    8224 main.go:141] libmachine: STDOUT: 
	I0408 10:44:20.405798    8224 main.go:141] libmachine: STDERR: 
	I0408 10:44:20.405850    8224 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2 +20000M
	I0408 10:44:20.416394    8224 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:44:20.416411    8224 main.go:141] libmachine: STDERR: 
	I0408 10:44:20.416426    8224 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2
	I0408 10:44:20.416429    8224 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:44:20.416468    8224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:65:b7:1c:8e:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2
	I0408 10:44:20.418178    8224 main.go:141] libmachine: STDOUT: 
	I0408 10:44:20.418193    8224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:44:20.418213    8224 client.go:171] duration metric: took 262.289875ms to LocalClient.Create
	I0408 10:44:22.420419    8224 start.go:128] duration metric: took 2.290274541s to createHost
	I0408 10:44:22.420511    8224 start.go:83] releasing machines lock for "multinode-529000", held for 2.290404792s
	W0408 10:44:22.420570    8224 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:44:22.436768    8224 out.go:177] * Deleting "multinode-529000" in qemu2 ...
	W0408 10:44:22.466811    8224 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:44:22.466842    8224 start.go:728] Will try again in 5 seconds ...
	I0408 10:44:27.469078    8224 start.go:360] acquireMachinesLock for multinode-529000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:44:27.469498    8224 start.go:364] duration metric: took 328.709µs to acquireMachinesLock for "multinode-529000"
	I0408 10:44:27.469635    8224 start.go:93] Provisioning new machine with config: &{Name:multinode-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:44:27.469963    8224 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:44:27.476746    8224 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 10:44:27.523967    8224 start.go:159] libmachine.API.Create for "multinode-529000" (driver="qemu2")
	I0408 10:44:27.524019    8224 client.go:168] LocalClient.Create starting
	I0408 10:44:27.524135    8224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:44:27.524196    8224 main.go:141] libmachine: Decoding PEM data...
	I0408 10:44:27.524218    8224 main.go:141] libmachine: Parsing certificate...
	I0408 10:44:27.524280    8224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:44:27.524321    8224 main.go:141] libmachine: Decoding PEM data...
	I0408 10:44:27.524336    8224 main.go:141] libmachine: Parsing certificate...
	I0408 10:44:27.524865    8224 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:44:27.676835    8224 main.go:141] libmachine: Creating SSH key...
	I0408 10:44:27.823848    8224 main.go:141] libmachine: Creating Disk image...
	I0408 10:44:27.823854    8224 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:44:27.824124    8224 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2
	I0408 10:44:27.837049    8224 main.go:141] libmachine: STDOUT: 
	I0408 10:44:27.837071    8224 main.go:141] libmachine: STDERR: 
	I0408 10:44:27.837134    8224 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2 +20000M
	I0408 10:44:27.848014    8224 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:44:27.848030    8224 main.go:141] libmachine: STDERR: 
	I0408 10:44:27.848039    8224 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2
	I0408 10:44:27.848042    8224 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:44:27.848075    8224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:f8:ff:71:b1:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2
	I0408 10:44:27.849788    8224 main.go:141] libmachine: STDOUT: 
	I0408 10:44:27.849802    8224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:44:27.849815    8224 client.go:171] duration metric: took 325.789834ms to LocalClient.Create
	I0408 10:44:29.852105    8224 start.go:128] duration metric: took 2.382081208s to createHost
	I0408 10:44:29.852198    8224 start.go:83] releasing machines lock for "multinode-529000", held for 2.382659666s
	W0408 10:44:29.852516    8224 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-529000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-529000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:44:29.861959    8224 out.go:177] 
	W0408 10:44:29.871185    8224 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:44:29.871280    8224 out.go:239] * 
	* 
	W0408 10:44:29.874194    8224 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:44:29.883986    8224 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-529000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000: exit status 7 (69.643417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.96s)
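
The verbose trace above also documents the disk-preparation steps libmachine performs before launching QEMU: a raw-to-qcow2 `qemu-img convert` followed by `qemu-img resize ... +20000M`. A minimal sketch of those two shell-outs, assuming qemu-img is on PATH; the paths are illustrative stand-ins for the profile's machines directory:

package main

import (
	"fmt"
	"os/exec"
)

// prepareDisk mirrors the two qemu-img invocations visible in the trace
// above: convert the raw boot image to qcow2, then grow it by sizeMB.
func prepareDisk(raw, qcow2 string, sizeMB int) error {
	convert := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)
	if out, err := convert.CombinedOutput(); err != nil {
		return fmt.Errorf("convert: %v: %s", err, out)
	}
	resize := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", sizeMB))
	if out, err := resize.CombinedOutput(); err != nil {
		return fmt.Errorf("resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := prepareDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
		fmt.Println(err)
		return
	}
	// Only after this point does libmachine exec socket_vmnet_client,
	// which is where the runs above fail with "connection refused".
	fmt.Println("disk image ready")
}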

TestMultiNode/serial/DeployApp2Nodes (95.23s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (60.852375ms)

** stderr ** 
	error: cluster "multinode-529000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- rollout status deployment/busybox: exit status 1 (59.103791ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.482459ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.05625ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.440916ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.977083ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.993ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.717708ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.241167ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.021041ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.964708ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.345125ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.467083ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
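The ten near-identical attempts above reflect the harness treating "no server found" as potentially transient and re-running the query until a deadline expires. A minimal sketch of that retry pattern in Go, with names and timings that are illustrative rather than the actual multinode_test.go code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryPodIPs re-runs the kubectl query until it succeeds or the deadline
// passes, mirroring the "may be temporary" loop visible in the log above.
func retryPodIPs(profile string, deadline time.Duration) (string, error) {
	start := time.Now()
	for {
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err == nil {
			return string(out), nil
		}
		if time.Since(start) > deadline {
			return "", fmt.Errorf("failed to retrieve Pod IPs (may be temporary): %w", err)
		}
		time.Sleep(10 * time.Second) // pause between attempts, as the timings above suggest
	}
}

func main() {
	ips, err := retryPodIPs("multinode-529000", 90*time.Second)
	fmt.Println(ips, err)
}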
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.264959ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- exec  -- nslookup kubernetes.io: exit status 1 (59.155083ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- exec  -- nslookup kubernetes.default: exit status 1 (59.056583ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (59.299916ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000: exit status 7 (32.435417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (95.23s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-529000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.63875ms)

** stderr ** 
	error: no server found for cluster "multinode-529000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000: exit status 7 (32.462375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-529000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-529000 -v 3 --alsologtostderr: exit status 83 (44.279125ms)

-- stdout --
	* The control-plane node multinode-529000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-529000"

-- /stdout --
** stderr ** 
	I0408 10:46:05.320907    8330 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:46:05.321096    8330 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:05.321099    8330 out.go:304] Setting ErrFile to fd 2...
	I0408 10:46:05.321102    8330 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:05.321222    8330 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:46:05.321471    8330 mustload.go:65] Loading cluster: multinode-529000
	I0408 10:46:05.321663    8330 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:46:05.326338    8330 out.go:177] * The control-plane node multinode-529000 host is not running: state=Stopped
	I0408 10:46:05.329170    8330 out.go:177]   To start a cluster, run: "minikube start -p multinode-529000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-529000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000: exit status 7 (32.525833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-529000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-529000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.633417ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-529000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-529000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-529000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
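The "unexpected end of JSON input" here is a follow-on error rather than separate corruption: because kubectl exited non-zero, it produced no stdout, and encoding/json reports exactly this when asked to decode empty input. A self-contained reproduction:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl failed, so the label list the test tried to decode was empty.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // prints: unexpected end of JSON input
}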
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000: exit status 7 (32.29825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-529000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-529000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-529000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"multinode-529000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000: exit status 7 (32.248625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
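For context, the assertion that failed above is a simple count: decode the `profile list --output json` payload and compare len(Config.Nodes) with the requested cluster size. A sketch of that check with the structs trimmed to only the fields the count needs (the full config types live in minikube's own code, not here):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList models just enough of `minikube profile list --output json`
// for the node-count assertion; the decoder ignores all other fields.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Abbreviated form of the payload captured in the failure above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-529000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	fmt.Printf("expected 3 nodes, have %d\n", len(pl.Valid[0].Config.Nodes))
}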

TestMultiNode/serial/CopyFile (0.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 status --output json --alsologtostderr: exit status 7 (32.403667ms)

-- stdout --
	{"Name":"multinode-529000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0408 10:46:05.560314    8343 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:46:05.560449    8343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:05.560452    8343 out.go:304] Setting ErrFile to fd 2...
	I0408 10:46:05.560455    8343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:05.560580    8343 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:46:05.560701    8343 out.go:298] Setting JSON to true
	I0408 10:46:05.560712    8343 mustload.go:65] Loading cluster: multinode-529000
	I0408 10:46:05.560778    8343 notify.go:220] Checking for updates...
	I0408 10:46:05.560910    8343 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:46:05.560916    8343 status.go:255] checking status of multinode-529000 ...
	I0408 10:46:05.561104    8343 status.go:330] multinode-529000 host status = "Stopped" (err=<nil>)
	I0408 10:46:05.561107    8343 status.go:343] host is not running, skipping remaining checks
	I0408 10:46:05.561110    8343 status.go:257] multinode-529000 status: &{Name:multinode-529000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-529000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
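This decode failure is a shape mismatch, not corrupt output: with only one surviving node, `status --output json` printed a single JSON object (see the stdout above), while the test unmarshals into a slice of statuses. The same error reproduces in isolation:

package main

import (
	"encoding/json"
	"fmt"
)

// Status carries just the fields needed to show the mismatch; the real type
// is cmd.Status inside minikube.
type Status struct {
	Name string
	Host string
}

func main() {
	raw := []byte(`{"Name":"multinode-529000","Host":"Stopped"}`)
	var statuses []Status
	err := json.Unmarshal(raw, &statuses)
	fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
}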
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000: exit status 7 (32.381083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 node stop m03: exit status 85 (49.485917ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-529000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 status: exit status 7 (31.895792ms)

-- stdout --
	multinode-529000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 status --alsologtostderr: exit status 7 (32.242666ms)

-- stdout --
	multinode-529000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:46:05.707138    8351 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:46:05.707271    8351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:05.707274    8351 out.go:304] Setting ErrFile to fd 2...
	I0408 10:46:05.707276    8351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:05.707419    8351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:46:05.707539    8351 out.go:298] Setting JSON to false
	I0408 10:46:05.707551    8351 mustload.go:65] Loading cluster: multinode-529000
	I0408 10:46:05.707605    8351 notify.go:220] Checking for updates...
	I0408 10:46:05.707748    8351 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:46:05.707754    8351 status.go:255] checking status of multinode-529000 ...
	I0408 10:46:05.707956    8351 status.go:330] multinode-529000 host status = "Stopped" (err=<nil>)
	I0408 10:46:05.707961    8351 status.go:343] host is not running, skipping remaining checks
	I0408 10:46:05.707963    8351 status.go:257] multinode-529000 status: &{Name:multinode-529000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-529000 status --alsologtostderr": multinode-529000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000: exit status 7 (31.750541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

TestMultiNode/serial/StartAfterStop (45.75s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 node start m03 -v=7 --alsologtostderr: exit status 85 (49.761458ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0408 10:46:05.771801    8355 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:46:05.772213    8355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:05.772217    8355 out.go:304] Setting ErrFile to fd 2...
	I0408 10:46:05.772219    8355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:05.772388    8355 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:46:05.772610    8355 mustload.go:65] Loading cluster: multinode-529000
	I0408 10:46:05.772791    8355 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:46:05.775991    8355 out.go:177] 
	W0408 10:46:05.779700    8355 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0408 10:46:05.779704    8355 out.go:239] * 
	* 
	W0408 10:46:05.781676    8355 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:46:05.785765    8355 out.go:177] 

** /stderr **
multinode_test.go:284: I0408 10:46:05.771801    8355 out.go:291] Setting OutFile to fd 1 ...
I0408 10:46:05.772213    8355 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:46:05.772217    8355 out.go:304] Setting ErrFile to fd 2...
I0408 10:46:05.772219    8355 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 10:46:05.772388    8355 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
I0408 10:46:05.772610    8355 mustload.go:65] Loading cluster: multinode-529000
I0408 10:46:05.772791    8355 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0408 10:46:05.775991    8355 out.go:177] 
W0408 10:46:05.779700    8355 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0408 10:46:05.779704    8355 out.go:239] * 
* 
W0408 10:46:05.781676    8355 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0408 10:46:05.785765    8355 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-529000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr: exit status 7 (32.52375ms)

-- stdout --
	multinode-529000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:46:05.821007    8357 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:46:05.821369    8357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:05.821373    8357 out.go:304] Setting ErrFile to fd 2...
	I0408 10:46:05.821376    8357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:05.821559    8357 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:46:05.821710    8357 out.go:298] Setting JSON to false
	I0408 10:46:05.821724    8357 mustload.go:65] Loading cluster: multinode-529000
	I0408 10:46:05.822007    8357 notify.go:220] Checking for updates...
	I0408 10:46:05.822173    8357 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:46:05.822180    8357 status.go:255] checking status of multinode-529000 ...
	I0408 10:46:05.822390    8357 status.go:330] multinode-529000 host status = "Stopped" (err=<nil>)
	I0408 10:46:05.822394    8357 status.go:343] host is not running, skipping remaining checks
	I0408 10:46:05.822396    8357 status.go:257] multinode-529000 status: &{Name:multinode-529000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr: exit status 7 (75.43ms)

-- stdout --
	multinode-529000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:46:07.290859    8359 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:46:07.291031    8359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:07.291040    8359 out.go:304] Setting ErrFile to fd 2...
	I0408 10:46:07.291043    8359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:07.291230    8359 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:46:07.291431    8359 out.go:298] Setting JSON to false
	I0408 10:46:07.291447    8359 mustload.go:65] Loading cluster: multinode-529000
	I0408 10:46:07.291474    8359 notify.go:220] Checking for updates...
	I0408 10:46:07.291727    8359 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:46:07.291735    8359 status.go:255] checking status of multinode-529000 ...
	I0408 10:46:07.291994    8359 status.go:330] multinode-529000 host status = "Stopped" (err=<nil>)
	I0408 10:46:07.291999    8359 status.go:343] host is not running, skipping remaining checks
	I0408 10:46:07.292002    8359 status.go:257] multinode-529000 status: &{Name:multinode-529000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr: exit status 7 (75.670791ms)

-- stdout --
	multinode-529000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:46:08.399339    8361 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:46:08.399515    8361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:08.399519    8361 out.go:304] Setting ErrFile to fd 2...
	I0408 10:46:08.399522    8361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:08.399667    8361 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:46:08.399814    8361 out.go:298] Setting JSON to false
	I0408 10:46:08.399830    8361 mustload.go:65] Loading cluster: multinode-529000
	I0408 10:46:08.399866    8361 notify.go:220] Checking for updates...
	I0408 10:46:08.400090    8361 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:46:08.400096    8361 status.go:255] checking status of multinode-529000 ...
	I0408 10:46:08.400384    8361 status.go:330] multinode-529000 host status = "Stopped" (err=<nil>)
	I0408 10:46:08.400389    8361 status.go:343] host is not running, skipping remaining checks
	I0408 10:46:08.400392    8361 status.go:257] multinode-529000 status: &{Name:multinode-529000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr: exit status 7 (76.992833ms)

-- stdout --
	multinode-529000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:46:10.718728    8363 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:46:10.718906    8363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:10.718911    8363 out.go:304] Setting ErrFile to fd 2...
	I0408 10:46:10.718914    8363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:10.719084    8363 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:46:10.719233    8363 out.go:298] Setting JSON to false
	I0408 10:46:10.719250    8363 mustload.go:65] Loading cluster: multinode-529000
	I0408 10:46:10.719294    8363 notify.go:220] Checking for updates...
	I0408 10:46:10.719492    8363 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:46:10.719500    8363 status.go:255] checking status of multinode-529000 ...
	I0408 10:46:10.719775    8363 status.go:330] multinode-529000 host status = "Stopped" (err=<nil>)
	I0408 10:46:10.719780    8363 status.go:343] host is not running, skipping remaining checks
	I0408 10:46:10.719782    8363 status.go:257] multinode-529000 status: &{Name:multinode-529000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr: exit status 7 (77.273875ms)

-- stdout --
	multinode-529000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:46:12.740577    8367 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:46:12.740778    8367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:12.740782    8367 out.go:304] Setting ErrFile to fd 2...
	I0408 10:46:12.740785    8367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:12.740970    8367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:46:12.741145    8367 out.go:298] Setting JSON to false
	I0408 10:46:12.741161    8367 mustload.go:65] Loading cluster: multinode-529000
	I0408 10:46:12.741189    8367 notify.go:220] Checking for updates...
	I0408 10:46:12.741442    8367 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:46:12.741454    8367 status.go:255] checking status of multinode-529000 ...
	I0408 10:46:12.741726    8367 status.go:330] multinode-529000 host status = "Stopped" (err=<nil>)
	I0408 10:46:12.741731    8367 status.go:343] host is not running, skipping remaining checks
	I0408 10:46:12.741734    8367 status.go:257] multinode-529000 status: &{Name:multinode-529000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr: exit status 7 (75.573583ms)

-- stdout --
	multinode-529000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:46:17.083825    8369 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:46:17.084015    8369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:17.084019    8369 out.go:304] Setting ErrFile to fd 2...
	I0408 10:46:17.084023    8369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:17.084196    8369 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:46:17.084370    8369 out.go:298] Setting JSON to false
	I0408 10:46:17.084389    8369 mustload.go:65] Loading cluster: multinode-529000
	I0408 10:46:17.084421    8369 notify.go:220] Checking for updates...
	I0408 10:46:17.084654    8369 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:46:17.084663    8369 status.go:255] checking status of multinode-529000 ...
	I0408 10:46:17.084926    8369 status.go:330] multinode-529000 host status = "Stopped" (err=<nil>)
	I0408 10:46:17.084931    8369 status.go:343] host is not running, skipping remaining checks
	I0408 10:46:17.084934    8369 status.go:257] multinode-529000 status: &{Name:multinode-529000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr: exit status 7 (77.996208ms)

-- stdout --
	multinode-529000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:46:22.450139    8373 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:46:22.450365    8373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:22.450370    8373 out.go:304] Setting ErrFile to fd 2...
	I0408 10:46:22.450373    8373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:22.450552    8373 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:46:22.450698    8373 out.go:298] Setting JSON to false
	I0408 10:46:22.450713    8373 mustload.go:65] Loading cluster: multinode-529000
	I0408 10:46:22.450752    8373 notify.go:220] Checking for updates...
	I0408 10:46:22.450968    8373 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:46:22.450975    8373 status.go:255] checking status of multinode-529000 ...
	I0408 10:46:22.451270    8373 status.go:330] multinode-529000 host status = "Stopped" (err=<nil>)
	I0408 10:46:22.451274    8373 status.go:343] host is not running, skipping remaining checks
	I0408 10:46:22.451277    8373 status.go:257] multinode-529000 status: &{Name:multinode-529000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr: exit status 7 (75.966958ms)

-- stdout --
	multinode-529000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:46:28.655971    8375 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:46:28.656184    8375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:28.656188    8375 out.go:304] Setting ErrFile to fd 2...
	I0408 10:46:28.656192    8375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:28.656356    8375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:46:28.656519    8375 out.go:298] Setting JSON to false
	I0408 10:46:28.656534    8375 mustload.go:65] Loading cluster: multinode-529000
	I0408 10:46:28.656578    8375 notify.go:220] Checking for updates...
	I0408 10:46:28.656792    8375 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:46:28.656798    8375 status.go:255] checking status of multinode-529000 ...
	I0408 10:46:28.657061    8375 status.go:330] multinode-529000 host status = "Stopped" (err=<nil>)
	I0408 10:46:28.657066    8375 status.go:343] host is not running, skipping remaining checks
	I0408 10:46:28.657069    8375 status.go:257] multinode-529000 status: &{Name:multinode-529000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr: exit status 7 (74.240708ms)

-- stdout --
	multinode-529000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:46:51.454066    8385 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:46:51.454248    8385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:51.454253    8385 out.go:304] Setting ErrFile to fd 2...
	I0408 10:46:51.454256    8385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:51.454405    8385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:46:51.454574    8385 out.go:298] Setting JSON to false
	I0408 10:46:51.454590    8385 mustload.go:65] Loading cluster: multinode-529000
	I0408 10:46:51.454623    8385 notify.go:220] Checking for updates...
	I0408 10:46:51.454909    8385 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:46:51.454917    8385 status.go:255] checking status of multinode-529000 ...
	I0408 10:46:51.455181    8385 status.go:330] multinode-529000 host status = "Stopped" (err=<nil>)
	I0408 10:46:51.455186    8385 status.go:343] host is not running, skipping remaining checks
	I0408 10:46:51.455189    8385 status.go:257] multinode-529000 status: &{Name:multinode-529000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-529000 status -v=7 --alsologtostderr" : exit status 7
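The stderr timestamps above (10:46:05, :07, :08, :10, :12, :17, :22, :28, :51) show the harness re-checking status at growing intervals before giving up. A rough sketch of such a backoff poll; the intervals and attempt count are illustrative, not the exact test logic:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	wait := time.Second
	for attempt := 1; attempt <= 9; attempt++ {
		err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-529000",
			"status", "-v=7", "--alsologtostderr").Run()
		if err == nil {
			fmt.Println("host is running")
			return
		}
		time.Sleep(wait) // exit status 7 = host stopped; wait and retry
		wait *= 2        // roughly doubling, as the timestamps suggest
	}
	fmt.Println("failed to run minikube status: host never came up")
}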
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000: exit status 7 (35.197084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (45.75s)

TestMultiNode/serial/RestartKeepsNodes (8.63s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-529000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-529000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-529000: (3.269549208s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-529000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-529000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.226363209s)

-- stdout --
	* [multinode-529000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-529000" primary control-plane node in "multinode-529000" cluster
	* Restarting existing qemu2 VM for "multinode-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
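The root cause in the stdout above is the VM's networking, not the guest itself: this profile uses Network:socket_vmnet, and QEMU cannot attach because nothing is accepting connections on /var/run/socket_vmnet. A quick reachability probe for that daemon; the check is illustrative, and the socket path is taken from the profile config dumped below:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The qemu2 driver expects the socket_vmnet daemon to be listening here.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err) // matches the ERROR above
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}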
** stderr ** 
	I0408 10:46:54.859149    8409 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:46:54.859321    8409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:54.859326    8409 out.go:304] Setting ErrFile to fd 2...
	I0408 10:46:54.859329    8409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:46:54.859495    8409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:46:54.860674    8409 out.go:298] Setting JSON to false
	I0408 10:46:54.879199    8409 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6384,"bootTime":1712592030,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:46:54.879256    8409 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:46:54.884688    8409 out.go:177] * [multinode-529000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:46:54.892578    8409 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:46:54.892611    8409 notify.go:220] Checking for updates...
	I0408 10:46:54.896672    8409 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:46:54.899616    8409 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:46:54.902651    8409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:46:54.905650    8409 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:46:54.908652    8409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:46:54.911951    8409 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:46:54.912010    8409 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:46:54.916597    8409 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 10:46:54.923607    8409 start.go:297] selected driver: qemu2
	I0408 10:46:54.923615    8409 start.go:901] validating driver "qemu2" against &{Name:multinode-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:46:54.923670    8409 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:46:54.926118    8409 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 10:46:54.926162    8409 cni.go:84] Creating CNI manager for ""
	I0408 10:46:54.926167    8409 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 10:46:54.926225    8409 start.go:340] cluster config:
	{Name:multinode-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:46:54.930764    8409 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:46:54.937446    8409 out.go:177] * Starting "multinode-529000" primary control-plane node in "multinode-529000" cluster
	I0408 10:46:54.941624    8409 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:46:54.941640    8409 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:46:54.941646    8409 cache.go:56] Caching tarball of preloaded images
	I0408 10:46:54.941708    8409 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:46:54.941713    8409 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:46:54.941760    8409 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/multinode-529000/config.json ...
	I0408 10:46:54.942184    8409 start.go:360] acquireMachinesLock for multinode-529000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:46:54.942217    8409 start.go:364] duration metric: took 26.958µs to acquireMachinesLock for "multinode-529000"
	I0408 10:46:54.942226    8409 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:46:54.942230    8409 fix.go:54] fixHost starting: 
	I0408 10:46:54.942350    8409 fix.go:112] recreateIfNeeded on multinode-529000: state=Stopped err=<nil>
	W0408 10:46:54.942360    8409 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:46:54.950640    8409 out.go:177] * Restarting existing qemu2 VM for "multinode-529000" ...
	I0408 10:46:54.954628    8409 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:f8:ff:71:b1:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2
	I0408 10:46:54.956911    8409 main.go:141] libmachine: STDOUT: 
	I0408 10:46:54.956937    8409 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:46:54.956968    8409 fix.go:56] duration metric: took 14.736625ms for fixHost
	I0408 10:46:54.956973    8409 start.go:83] releasing machines lock for "multinode-529000", held for 14.75025ms
	W0408 10:46:54.956983    8409 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:46:54.957016    8409 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:46:54.957021    8409 start.go:728] Will try again in 5 seconds ...
	I0408 10:46:59.959255    8409 start.go:360] acquireMachinesLock for multinode-529000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:46:59.959634    8409 start.go:364] duration metric: took 309.625µs to acquireMachinesLock for "multinode-529000"
	I0408 10:46:59.959753    8409 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:46:59.959777    8409 fix.go:54] fixHost starting: 
	I0408 10:46:59.960577    8409 fix.go:112] recreateIfNeeded on multinode-529000: state=Stopped err=<nil>
	W0408 10:46:59.960602    8409 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:46:59.965916    8409 out.go:177] * Restarting existing qemu2 VM for "multinode-529000" ...
	I0408 10:46:59.973964    8409 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:f8:ff:71:b1:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2
	I0408 10:46:59.983055    8409 main.go:141] libmachine: STDOUT: 
	I0408 10:46:59.983118    8409 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:46:59.983182    8409 fix.go:56] duration metric: took 23.407625ms for fixHost
	I0408 10:46:59.983197    8409 start.go:83] releasing machines lock for "multinode-529000", held for 23.538916ms
	W0408 10:46:59.983377    8409 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-529000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-529000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:46:59.990999    8409 out.go:177] 
	W0408 10:46:59.994900    8409 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:46:59.994988    8409 out.go:239] * 
	* 
	W0408 10:46:59.997922    8409 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:47:00.004987    8409 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-529000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-529000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000: exit status 7 (35.11525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.63s)
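
Every failure in this section traces to the single root cause printed above: nothing is serving /var/run/socket_vmnet, so socket_vmnet_client gets "Connection refused" and the qemu2 driver can never attach a VM to the network. A minimal triage sketch, assuming socket_vmnet is installed at the paths the log itself prints (the daemon start line follows the upstream socket_vmnet README and is not taken from this report):

    # Is anything serving the socket the driver expects?
    ls -l /var/run/socket_vmnet
    sudo lsof -U | grep socket_vmnet

    # If not, start the daemon manually before re-running the suite:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet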

TestMultiNode/serial/DeleteNode (0.12s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 node delete m03: exit status 83 (56.083125ms)

-- stdout --
	* The control-plane node multinode-529000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-529000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-529000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 status --alsologtostderr: exit status 7 (32.98125ms)

-- stdout --
	multinode-529000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:47:00.209836    8423 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:47:00.210000    8423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:47:00.210004    8423 out.go:304] Setting ErrFile to fd 2...
	I0408 10:47:00.210006    8423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:47:00.210148    8423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:47:00.210270    8423 out.go:298] Setting JSON to false
	I0408 10:47:00.210281    8423 mustload.go:65] Loading cluster: multinode-529000
	I0408 10:47:00.210338    8423 notify.go:220] Checking for updates...
	I0408 10:47:00.210499    8423 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:47:00.210506    8423 status.go:255] checking status of multinode-529000 ...
	I0408 10:47:00.210728    8423 status.go:330] multinode-529000 host status = "Stopped" (err=<nil>)
	I0408 10:47:00.210732    8423 status.go:343] host is not running, skipping remaining checks
	I0408 10:47:00.210734    8423 status.go:257] multinode-529000 status: &{Name:multinode-529000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-529000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000: exit status 7 (32.117625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.12s)

TestMultiNode/serial/StopMultiNode (3.79s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-529000 stop: (3.661734625s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 status: exit status 7 (65.055ms)

-- stdout --
	multinode-529000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-529000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-529000 status --alsologtostderr: exit status 7 (34.019917ms)

-- stdout --
	multinode-529000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0408 10:47:04.003581    8447 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:47:04.003736    8447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:47:04.003739    8447 out.go:304] Setting ErrFile to fd 2...
	I0408 10:47:04.003742    8447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:47:04.003859    8447 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:47:04.003989    8447 out.go:298] Setting JSON to false
	I0408 10:47:04.004000    8447 mustload.go:65] Loading cluster: multinode-529000
	I0408 10:47:04.004057    8447 notify.go:220] Checking for updates...
	I0408 10:47:04.004194    8447 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:47:04.004203    8447 status.go:255] checking status of multinode-529000 ...
	I0408 10:47:04.004421    8447 status.go:330] multinode-529000 host status = "Stopped" (err=<nil>)
	I0408 10:47:04.004424    8447 status.go:343] host is not running, skipping remaining checks
	I0408 10:47:04.004426    8447 status.go:257] multinode-529000 status: &{Name:multinode-529000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-529000 status --alsologtostderr": multinode-529000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-529000 status --alsologtostderr": multinode-529000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000: exit status 7 (31.956917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.79s)
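
Why this test fails even though the stop itself succeeded: the assertions at multinode_test.go:364 and :368 evidently expect one "host: Stopped" and one "kubelet: Stopped" entry per node of the multinode cluster. Because the earlier node-add steps never ran (the VM never started), the status output above lists only the single control-plane node, so the counts come up short. A hypothetical way to reproduce the check by hand:

    out/minikube-darwin-arm64 -p multinode-529000 status | grep -c "host: Stopped"
    # prints 1 here; the test expects one match per cluster node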

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-529000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-529000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.192338292s)

-- stdout --
	* [multinode-529000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-529000" primary control-plane node in "multinode-529000" cluster
	* Restarting existing qemu2 VM for "multinode-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 10:47:04.067246    8451 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:47:04.067409    8451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:47:04.067412    8451 out.go:304] Setting ErrFile to fd 2...
	I0408 10:47:04.067414    8451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:47:04.067535    8451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:47:04.068581    8451 out.go:298] Setting JSON to false
	I0408 10:47:04.084700    8451 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6394,"bootTime":1712592030,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:47:04.084756    8451 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:47:04.089329    8451 out.go:177] * [multinode-529000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:47:04.101199    8451 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:47:04.097222    8451 notify.go:220] Checking for updates...
	I0408 10:47:04.107172    8451 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:47:04.111133    8451 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:47:04.114175    8451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:47:04.117207    8451 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:47:04.120167    8451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:47:04.123526    8451 config.go:182] Loaded profile config "multinode-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:47:04.123785    8451 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:47:04.128066    8451 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 10:47:04.135180    8451 start.go:297] selected driver: qemu2
	I0408 10:47:04.135187    8451 start.go:901] validating driver "qemu2" against &{Name:multinode-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:47:04.135260    8451 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:47:04.137599    8451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 10:47:04.137659    8451 cni.go:84] Creating CNI manager for ""
	I0408 10:47:04.137666    8451 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 10:47:04.137724    8451 start.go:340] cluster config:
	{Name:multinode-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:47:04.142149    8451 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:47:04.149192    8451 out.go:177] * Starting "multinode-529000" primary control-plane node in "multinode-529000" cluster
	I0408 10:47:04.152138    8451 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:47:04.152155    8451 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:47:04.152165    8451 cache.go:56] Caching tarball of preloaded images
	I0408 10:47:04.152211    8451 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:47:04.152216    8451 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:47:04.152279    8451 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/multinode-529000/config.json ...
	I0408 10:47:04.152734    8451 start.go:360] acquireMachinesLock for multinode-529000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:47:04.152759    8451 start.go:364] duration metric: took 19µs to acquireMachinesLock for "multinode-529000"
	I0408 10:47:04.152767    8451 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:47:04.152772    8451 fix.go:54] fixHost starting: 
	I0408 10:47:04.152887    8451 fix.go:112] recreateIfNeeded on multinode-529000: state=Stopped err=<nil>
	W0408 10:47:04.152896    8451 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:47:04.161085    8451 out.go:177] * Restarting existing qemu2 VM for "multinode-529000" ...
	I0408 10:47:04.165189    8451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:f8:ff:71:b1:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2
	I0408 10:47:04.167225    8451 main.go:141] libmachine: STDOUT: 
	I0408 10:47:04.167245    8451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:47:04.167272    8451 fix.go:56] duration metric: took 14.4975ms for fixHost
	I0408 10:47:04.167278    8451 start.go:83] releasing machines lock for "multinode-529000", held for 14.514458ms
	W0408 10:47:04.167283    8451 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:47:04.167316    8451 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:47:04.167321    8451 start.go:728] Will try again in 5 seconds ...
	I0408 10:47:09.169607    8451 start.go:360] acquireMachinesLock for multinode-529000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:47:09.169999    8451 start.go:364] duration metric: took 289.25µs to acquireMachinesLock for "multinode-529000"
	I0408 10:47:09.170129    8451 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:47:09.170149    8451 fix.go:54] fixHost starting: 
	I0408 10:47:09.170811    8451 fix.go:112] recreateIfNeeded on multinode-529000: state=Stopped err=<nil>
	W0408 10:47:09.170837    8451 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:47:09.176263    8451 out.go:177] * Restarting existing qemu2 VM for "multinode-529000" ...
	I0408 10:47:09.184445    8451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:f8:ff:71:b1:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/multinode-529000/disk.qcow2
	I0408 10:47:09.193595    8451 main.go:141] libmachine: STDOUT: 
	I0408 10:47:09.193668    8451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:47:09.193738    8451 fix.go:56] duration metric: took 23.594167ms for fixHost
	I0408 10:47:09.193757    8451 start.go:83] releasing machines lock for "multinode-529000", held for 23.732666ms
	W0408 10:47:09.193928    8451 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-529000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-529000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:47:09.201236    8451 out.go:177] 
	W0408 10:47:09.205151    8451 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:47:09.205214    8451 out.go:239] * 
	* 
	W0408 10:47:09.207634    8451 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:47:09.216237    8451 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-529000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000: exit status 7 (70.736208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
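
For readers unfamiliar with the "libmachine: executing:" lines above: the qemu2 driver never opens the vmnet interface itself. socket_vmnet_client connects to /var/run/socket_vmnet and execs qemu-system-aarch64 with the connected socket inherited as file descriptor 3, which is what "-netdev socket,id=net0,fd=3" refers to. A simplified skeleton of the invocation, with flags elided as "..." for readability:

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -accel hvf ... \
      -device virtio-net-pci,netdev=net0,mac=... \
      -netdev socket,id=net0,fd=3 ...

With no daemon behind the socket, the client's connect fails and qemu never starts, which is exactly the "Connection refused" printed before each fix.go retry above.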

TestMultiNode/serial/ValidateNameConflict (20.47s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-529000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-529000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-529000-m01 --driver=qemu2 : exit status 80 (10.089141417s)

-- stdout --
	* [multinode-529000-m01] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-529000-m01" primary control-plane node in "multinode-529000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-529000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-529000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-529000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-529000-m02 --driver=qemu2 : exit status 80 (10.121403875s)

-- stdout --
	* [multinode-529000-m02] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-529000-m02" primary control-plane node in "multinode-529000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-529000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-529000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-529000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-529000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-529000: exit status 83 (83.0305ms)

-- stdout --
	* The control-plane node multinode-529000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-529000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-529000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-529000 -n multinode-529000: exit status 7 (32.538583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.47s)
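
One side effect worth noting: in the log shown, the test tears down only the -m02 conflict profile (see the "delete -p multinode-529000-m02" run above), so a failed run can leave multinode-529000-m01 behind. A cleanup sketch before retrying locally; the -m01 delete is an addition for illustration, not something this run executed:

    out/minikube-darwin-arm64 delete -p multinode-529000-m01
    out/minikube-darwin-arm64 delete -p multinode-529000-m02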

TestPreload (10.05s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-494000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-494000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.872881709s)

-- stdout --
	* [test-preload-494000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-494000" primary control-plane node in "test-preload-494000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-494000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 10:47:29.938022    8508 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:47:29.938154    8508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:47:29.938157    8508 out.go:304] Setting ErrFile to fd 2...
	I0408 10:47:29.938160    8508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:47:29.938278    8508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:47:29.939399    8508 out.go:298] Setting JSON to false
	I0408 10:47:29.955645    8508 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6419,"bootTime":1712592030,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:47:29.955717    8508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:47:29.962340    8508 out.go:177] * [test-preload-494000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:47:29.970348    8508 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:47:29.970408    8508 notify.go:220] Checking for updates...
	I0408 10:47:29.978300    8508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:47:29.981377    8508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:47:29.984302    8508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:47:29.987311    8508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:47:29.990299    8508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:47:29.993704    8508 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:47:29.993765    8508 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:47:29.998284    8508 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 10:47:30.005322    8508 start.go:297] selected driver: qemu2
	I0408 10:47:30.005330    8508 start.go:901] validating driver "qemu2" against <nil>
	I0408 10:47:30.005337    8508 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:47:30.007786    8508 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 10:47:30.011258    8508 out.go:177] * Automatically selected the socket_vmnet network
	I0408 10:47:30.014465    8508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 10:47:30.014511    8508 cni.go:84] Creating CNI manager for ""
	I0408 10:47:30.014519    8508 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:47:30.014526    8508 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 10:47:30.014568    8508 start.go:340] cluster config:
	{Name:test-preload-494000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:47:30.019241    8508 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:47:30.026329    8508 out.go:177] * Starting "test-preload-494000" primary control-plane node in "test-preload-494000" cluster
	I0408 10:47:30.030333    8508 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0408 10:47:30.030413    8508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/test-preload-494000/config.json ...
	I0408 10:47:30.030428    8508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/test-preload-494000/config.json: {Name:mk87944781e4abfd19b71d2827bbf10291f5b728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:47:30.030433    8508 cache.go:107] acquiring lock: {Name:mk85baeb762137470497570e9296584c4f360ef9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:47:30.030440    8508 cache.go:107] acquiring lock: {Name:mk86460c8567dc87b6ddb31d3234a6d32a25adfa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:47:30.030478    8508 cache.go:107] acquiring lock: {Name:mk0d1251f413bc8953279bcb94471e6ac24c026d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:47:30.030655    8508 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:47:30.030666    8508 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0408 10:47:30.030685    8508 start.go:360] acquireMachinesLock for test-preload-494000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:47:30.030684    8508 cache.go:107] acquiring lock: {Name:mka2773ce984b146f7455f5cc4f59c6e44438521 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:47:30.030720    8508 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "test-preload-494000"
	I0408 10:47:30.030720    8508 cache.go:107] acquiring lock: {Name:mk8d97cead4b028c2d5641ea85c2352743dcd8b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:47:30.030722    8508 cache.go:107] acquiring lock: {Name:mk309f863bccb752b239a0075e56675d183fa50c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:47:30.030733    8508 start.go:93] Provisioning new machine with config: &{Name:test-preload-494000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:47:30.030766    8508 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:47:30.030794    8508 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0408 10:47:30.035307    8508 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 10:47:30.030775    8508 cache.go:107] acquiring lock: {Name:mkf9bd7ac777ee6faa894ba69224371b2ca3b8e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:47:30.030813    8508 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0408 10:47:30.030702    8508 cache.go:107] acquiring lock: {Name:mk1a474ac23047d958501a51ea088770e346a5a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:47:30.030913    8508 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0408 10:47:30.031363    8508 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:47:30.035948    8508 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0408 10:47:30.035980    8508 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0408 10:47:30.041550    8508 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0408 10:47:30.041563    8508 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:47:30.041586    8508 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0408 10:47:30.045841    8508 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0408 10:47:30.045938    8508 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0408 10:47:30.046028    8508 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:47:30.046036    8508 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0408 10:47:30.046098    8508 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0408 10:47:30.053321    8508 start.go:159] libmachine.API.Create for "test-preload-494000" (driver="qemu2")
	I0408 10:47:30.053344    8508 client.go:168] LocalClient.Create starting
	I0408 10:47:30.053423    8508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:47:30.053453    8508 main.go:141] libmachine: Decoding PEM data...
	I0408 10:47:30.053467    8508 main.go:141] libmachine: Parsing certificate...
	I0408 10:47:30.053512    8508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:47:30.053534    8508 main.go:141] libmachine: Decoding PEM data...
	I0408 10:47:30.053540    8508 main.go:141] libmachine: Parsing certificate...
	I0408 10:47:30.053898    8508 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:47:30.198959    8508 main.go:141] libmachine: Creating SSH key...
	I0408 10:47:30.310924    8508 main.go:141] libmachine: Creating Disk image...
	I0408 10:47:30.310953    8508 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:47:30.311240    8508 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/disk.qcow2
	I0408 10:47:30.324408    8508 main.go:141] libmachine: STDOUT: 
	I0408 10:47:30.324431    8508 main.go:141] libmachine: STDERR: 
	I0408 10:47:30.324480    8508 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/disk.qcow2 +20000M
	I0408 10:47:30.336834    8508 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:47:30.336851    8508 main.go:141] libmachine: STDERR: 
	I0408 10:47:30.336864    8508 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/disk.qcow2
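	The disk-creation step above is just two qemu-img invocations. A minimal, self-contained Go sketch of the same convert-then-resize sequence, assuming qemu-img is on PATH; the file names are placeholders for the machine paths in the log:

	// qemu_img.go - sketch of the convert + resize sequence logged above.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		raw := "disk.qcow2.raw" // placeholder for the machine's raw base image
		disk := "disk.qcow2"    // placeholder for the machine disk

		// qemu-img convert -f raw -O qcow2 <raw> <disk>
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, disk).CombinedOutput(); err != nil {
			log.Fatalf("convert: %v\n%s", err, out)
		}
		// qemu-img resize <disk> +20000M ("+" grows the image by that amount)
		if out, err := exec.Command("qemu-img", "resize", disk, "+20000M").CombinedOutput(); err != nil {
			log.Fatalf("resize: %v\n%s", err, out)
		}
		log.Println("disk image ready")
	}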
	I0408 10:47:30.336872    8508 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:47:30.336906    8508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:f6:07:e6:a2:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/disk.qcow2
	I0408 10:47:30.338970    8508 main.go:141] libmachine: STDOUT: 
	I0408 10:47:30.338985    8508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:47:30.339006    8508 client.go:171] duration metric: took 285.654167ms to LocalClient.Create
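	Both create attempts in this run die at the same point: qemu is launched through socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet because no socket_vmnet daemon is listening on the host. The condition can be confirmed independently of minikube with a short probe; a sketch, assuming the default socket path from the config above:

	// socket_probe.go - checks whether the socket_vmnet daemon is listening.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// This is the same unix socket socket_vmnet_client hands to qemu;
		// "connection refused" here matches the STDERR above and means the
		// daemon behind /var/run/socket_vmnet is not running.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is up")
	}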
	I0408 10:47:30.440097    8508 cache.go:162] opening:  /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0408 10:47:30.457893    8508 cache.go:162] opening:  /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0408 10:47:30.473035    8508 cache.go:162] opening:  /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0408 10:47:30.480910    8508 cache.go:162] opening:  /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0408 10:47:30.497062    8508 cache.go:162] opening:  /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0408 10:47:30.510304    8508 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0408 10:47:30.510333    8508 cache.go:162] opening:  /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0408 10:47:30.534398    8508 cache.go:162] opening:  /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0408 10:47:30.660726    8508 cache.go:157] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0408 10:47:30.660798    8508 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 630.142166ms
	I0408 10:47:30.660838    8508 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0408 10:47:30.685275    8508 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0408 10:47:30.685383    8508 cache.go:162] opening:  /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0408 10:47:30.873443    8508 cache.go:157] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0408 10:47:30.873515    8508 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 843.073041ms
	I0408 10:47:30.873545    8508 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0408 10:47:31.773998    8508 cache.go:157] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0408 10:47:31.774043    8508 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 1.743320459s
	I0408 10:47:31.774096    8508 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0408 10:47:31.839048    8508 cache.go:157] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0408 10:47:31.839092    8508 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 1.808370334s
	I0408 10:47:31.839117    8508 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0408 10:47:32.339293    8508 start.go:128] duration metric: took 2.308488417s to createHost
	I0408 10:47:32.339342    8508 start.go:83] releasing machines lock for "test-preload-494000", held for 2.30859675s
	W0408 10:47:32.339388    8508 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:47:32.360782    8508 out.go:177] * Deleting "test-preload-494000" in qemu2 ...
	W0408 10:47:32.393624    8508 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:47:32.393656    8508 start.go:728] Will try again in 5 seconds ...
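	After the first failure minikube deletes the half-created machine and makes exactly one more attempt five seconds later, then exits with GUEST_PROVISION. A generic Go sketch of that retry shape; the attempt count and delay are taken from this log, and createHost is a hypothetical stand-in for the real creation step:

	// retry.go - the two-attempt create loop visible in this log.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost is a hypothetical stand-in for the host-creation step that
	// fails above with "Connection refused".
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		var err error
		for attempt := 1; attempt <= 2; attempt++ { // one retry, as in the log
			if err = createHost(); err == nil {
				fmt.Println("host created")
				return
			}
			if attempt < 2 {
				fmt.Println("! StartHost failed, but will try again:", err)
				time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			}
		}
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}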
	I0408 10:47:34.451406    8508 cache.go:157] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0408 10:47:34.451453    8508 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.420986375s
	I0408 10:47:34.451508    8508 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0408 10:47:35.135939    8508 cache.go:157] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0408 10:47:35.135983    8508 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.105491083s
	I0408 10:47:35.136006    8508 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0408 10:47:36.278764    8508 cache.go:157] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0408 10:47:36.278811    8508 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.248044458s
	I0408 10:47:36.278836    8508 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0408 10:47:37.395881    8508 start.go:360] acquireMachinesLock for test-preload-494000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:47:37.396245    8508 start.go:364] duration metric: took 304.083µs to acquireMachinesLock for "test-preload-494000"
	I0408 10:47:37.396365    8508 start.go:93] Provisioning new machine with config: &{Name:test-preload-494000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:47:37.396593    8508 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:47:37.408174    8508 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 10:47:37.455463    8508 start.go:159] libmachine.API.Create for "test-preload-494000" (driver="qemu2")
	I0408 10:47:37.455512    8508 client.go:168] LocalClient.Create starting
	I0408 10:47:37.455630    8508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:47:37.455693    8508 main.go:141] libmachine: Decoding PEM data...
	I0408 10:47:37.455715    8508 main.go:141] libmachine: Parsing certificate...
	I0408 10:47:37.455774    8508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:47:37.455819    8508 main.go:141] libmachine: Decoding PEM data...
	I0408 10:47:37.455836    8508 main.go:141] libmachine: Parsing certificate...
	I0408 10:47:37.456360    8508 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:47:37.611941    8508 main.go:141] libmachine: Creating SSH key...
	I0408 10:47:37.698219    8508 main.go:141] libmachine: Creating Disk image...
	I0408 10:47:37.698224    8508 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:47:37.698479    8508 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/disk.qcow2
	I0408 10:47:37.711192    8508 main.go:141] libmachine: STDOUT: 
	I0408 10:47:37.711217    8508 main.go:141] libmachine: STDERR: 
	I0408 10:47:37.711280    8508 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/disk.qcow2 +20000M
	I0408 10:47:37.722367    8508 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:47:37.722382    8508 main.go:141] libmachine: STDERR: 
	I0408 10:47:37.722398    8508 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/disk.qcow2
	I0408 10:47:37.722403    8508 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:47:37.722439    8508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:70:1a:f0:0f:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/test-preload-494000/disk.qcow2
	I0408 10:47:37.724239    8508 main.go:141] libmachine: STDOUT: 
	I0408 10:47:37.724254    8508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:47:37.724268    8508 client.go:171] duration metric: took 268.749125ms to LocalClient.Create
	I0408 10:47:38.053206    8508 cache.go:157] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0408 10:47:38.053319    8508 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.022581333s
	I0408 10:47:38.053346    8508 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0408 10:47:38.053392    8508 cache.go:87] Successfully saved all images to host disk.
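	Meanwhile the image-caching goroutines, unaffected by the VM failure, run to completion: each image is pulled (with an arch fix where the registry served amd64 instead of arm64), written under .minikube/cache/images/arm64/, and verified to exist before being declared saved. A reduced Go sketch of that exists-or-save pattern under assumed helper names; cachePath and saveToTar are hypothetical, not minikube's API:

	// image_cache.go - reduced sketch of the exists-or-save cache pattern.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// cachePath (hypothetical) mirrors the layout in the log:
	// <root>/cache/images/arm64/<registry>/<name>_<tag>
	func cachePath(root, image string) string {
		return filepath.Join(root, "cache", "images", "arm64", strings.Replace(image, ":", "_", 1))
	}

	// saveToTar (hypothetical) stands in for pulling the image and writing the tar.
	func saveToTar(image, dst string) error {
		fmt.Println("opening: ", dst)
		return nil
	}

	func cacheImage(root, image string) error {
		dst := cachePath(root, image)
		if _, err := os.Stat(dst); err == nil {
			fmt.Printf("cache image %q already exists\n", image)
			return nil // the fast path behind the cache.go:157 "exists" lines
		}
		return saveToTar(image, dst)
	}

	func main() {
		_ = cacheImage(".minikube", "registry.k8s.io/pause:3.7")
	}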
	I0408 10:47:39.726490    8508 start.go:128] duration metric: took 2.329851167s to createHost
	I0408 10:47:39.726541    8508 start.go:83] releasing machines lock for "test-preload-494000", held for 2.330253416s
	W0408 10:47:39.726878    8508 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-494000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-494000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:47:39.744655    8508 out.go:177] 
	W0408 10:47:39.748457    8508 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:47:39.748485    8508 out.go:239] * 
	* 
	W0408 10:47:39.751109    8508 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:47:39.765559    8508 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-494000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-04-08 10:47:39.783423 -0700 PDT m=+728.666983376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-494000 -n test-preload-494000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-494000 -n test-preload-494000: exit status 7 (68.345583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-494000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-494000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-494000
--- FAIL: TestPreload (10.05s)

TestScheduledStopUnix (10.04s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-009000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-009000 --memory=2048 --driver=qemu2 : exit status 80 (9.85034975s)

-- stdout --
	* [scheduled-stop-009000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-009000" primary control-plane node in "scheduled-stop-009000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-009000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-009000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-009000" primary control-plane node in "scheduled-stop-009000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-009000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-04-08 10:47:49.81017 -0700 PDT m=+738.693664209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-009000 -n scheduled-stop-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-009000 -n scheduled-stop-009000: exit status 7 (72.603667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-009000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-009000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-009000
--- FAIL: TestScheduledStopUnix (10.04s)

TestSkaffold (12.13s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2635846635 version
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-023000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-023000 --memory=2600 --driver=qemu2 : exit status 80 (9.844436625s)

-- stdout --
	* [skaffold-023000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-023000" primary control-plane node in "skaffold-023000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-023000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-023000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-023000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-023000" primary control-plane node in "skaffold-023000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-023000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-023000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-04-08 10:48:01.944396 -0700 PDT m=+750.827810501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-023000 -n skaffold-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-023000 -n skaffold-023000: exit status 7 (65.66975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-023000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-023000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-023000
--- FAIL: TestSkaffold (12.13s)

TestRunningBinaryUpgrade (588.98s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.809302510 start -p running-upgrade-603000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.809302510 start -p running-upgrade-603000 --memory=2200 --vm-driver=qemu2 : (49.451024708s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-603000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-603000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m24.062523208s)

-- stdout --
	* [running-upgrade-603000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-603000" primary control-plane node in "running-upgrade-603000" cluster
	* Updating the running qemu2 "running-upgrade-603000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner

-- /stdout --
** stderr ** 
	I0408 10:49:32.956679    8917 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:49:32.956861    8917 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:49:32.956866    8917 out.go:304] Setting ErrFile to fd 2...
	I0408 10:49:32.956869    8917 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:49:32.957001    8917 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:49:32.958197    8917 out.go:298] Setting JSON to false
	I0408 10:49:32.978079    8917 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6542,"bootTime":1712592030,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:49:32.978161    8917 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:49:32.983252    8917 out.go:177] * [running-upgrade-603000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:49:32.995149    8917 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:49:32.991360    8917 notify.go:220] Checking for updates...
	I0408 10:49:32.999277    8917 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:49:33.003021    8917 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:49:33.006151    8917 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:49:33.009141    8917 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:49:33.012184    8917 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:49:33.015507    8917 config.go:182] Loaded profile config "running-upgrade-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 10:49:33.018138    8917 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 10:49:33.021175    8917 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:49:33.025147    8917 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 10:49:33.032019    8917 start.go:297] selected driver: qemu2
	I0408 10:49:33.032026    8917 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51288 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 10:49:33.032077    8917 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:49:33.035481    8917 cni.go:84] Creating CNI manager for ""
	I0408 10:49:33.035497    8917 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:49:33.035520    8917 start.go:340] cluster config:
	{Name:running-upgrade-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51288 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 10:49:33.035567    8917 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:49:33.043153    8917 out.go:177] * Starting "running-upgrade-603000" primary control-plane node in "running-upgrade-603000" cluster
	I0408 10:49:33.047149    8917 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0408 10:49:33.047165    8917 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0408 10:49:33.047173    8917 cache.go:56] Caching tarball of preloaded images
	I0408 10:49:33.047227    8917 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:49:33.047232    8917 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0408 10:49:33.047269    8917 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/config.json ...
	I0408 10:49:33.047765    8917 start.go:360] acquireMachinesLock for running-upgrade-603000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:49:33.047804    8917 start.go:364] duration metric: took 32.667µs to acquireMachinesLock for "running-upgrade-603000"
	I0408 10:49:33.047814    8917 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:49:33.047819    8917 fix.go:54] fixHost starting: 
	I0408 10:49:33.048469    8917 fix.go:112] recreateIfNeeded on running-upgrade-603000: state=Running err=<nil>
	W0408 10:49:33.048480    8917 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:49:33.056167    8917 out.go:177] * Updating the running qemu2 "running-upgrade-603000" VM ...
	I0408 10:49:33.060151    8917 machine.go:94] provisionDockerMachine start ...
	I0408 10:49:33.060186    8917 main.go:141] libmachine: Using SSH client type: native
	I0408 10:49:33.060284    8917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d59c80] 0x104d5c4e0 <nil>  [] 0s} localhost 51256 <nil> <nil>}
	I0408 10:49:33.060289    8917 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 10:49:33.124996    8917 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-603000
	
	I0408 10:49:33.125012    8917 buildroot.go:166] provisioning hostname "running-upgrade-603000"
	I0408 10:49:33.125054    8917 main.go:141] libmachine: Using SSH client type: native
	I0408 10:49:33.125168    8917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d59c80] 0x104d5c4e0 <nil>  [] 0s} localhost 51256 <nil> <nil>}
	I0408 10:49:33.125174    8917 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-603000 && echo "running-upgrade-603000" | sudo tee /etc/hostname
	I0408 10:49:33.191646    8917 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-603000
	
	I0408 10:49:33.191698    8917 main.go:141] libmachine: Using SSH client type: native
	I0408 10:49:33.191797    8917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d59c80] 0x104d5c4e0 <nil>  [] 0s} localhost 51256 <nil> <nil>}
	I0408 10:49:33.191807    8917 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-603000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-603000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-603000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 10:49:33.254892    8917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
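
The three SSH commands above form a small idempotent hostname routine: set the kernel hostname and persist it to /etc/hostname, then ensure /etc/hosts maps 127.0.1.1 to the new name, rewriting an existing 127.0.1.1 entry instead of appending a duplicate. A minimal Go sketch of the same flow, assuming a hypothetical runSSH helper (not minikube's actual provisioner):

    package provision

    import "fmt"

    // provisionHostname mirrors the hostname steps in the log above; runSSH
    // is a stand-in for minikube's SSH runner.
    func provisionHostname(runSSH func(cmd string) (string, error), name string) error {
        if _, err := runSSH(fmt.Sprintf(
            "sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)); err != nil {
            return fmt.Errorf("set hostname: %w", err)
        }
        // Rewrite an existing 127.0.1.1 entry rather than appending a new
        // one, so re-provisioning the same machine stays idempotent.
        script := fmt.Sprintf(`if ! grep -q '\s%[1]s$' /etc/hosts; then
      if grep -q '^127.0.1.1\s' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
        _, err := runSSH(script)
        return err
    }
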
	I0408 10:49:33.254904    8917 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18585-6624/.minikube CaCertPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18585-6624/.minikube}
	I0408 10:49:33.254917    8917 buildroot.go:174] setting up certificates
	I0408 10:49:33.254923    8917 provision.go:84] configureAuth start
	I0408 10:49:33.254926    8917 provision.go:143] copyHostCerts
	I0408 10:49:33.255015    8917 exec_runner.go:144] found /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.pem, removing ...
	I0408 10:49:33.255023    8917 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.pem
	I0408 10:49:33.255134    8917 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.pem (1082 bytes)
	I0408 10:49:33.255311    8917 exec_runner.go:144] found /Users/jenkins/minikube-integration/18585-6624/.minikube/cert.pem, removing ...
	I0408 10:49:33.255315    8917 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18585-6624/.minikube/cert.pem
	I0408 10:49:33.255356    8917 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18585-6624/.minikube/cert.pem (1123 bytes)
	I0408 10:49:33.255452    8917 exec_runner.go:144] found /Users/jenkins/minikube-integration/18585-6624/.minikube/key.pem, removing ...
	I0408 10:49:33.255456    8917 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18585-6624/.minikube/key.pem
	I0408 10:49:33.255491    8917 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18585-6624/.minikube/key.pem (1675 bytes)
	I0408 10:49:33.255577    8917 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-603000 san=[127.0.0.1 localhost minikube running-upgrade-603000]
	I0408 10:49:33.294894    8917 provision.go:177] copyRemoteCerts
	I0408 10:49:33.294934    8917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 10:49:33.294942    8917 sshutil.go:53] new ssh client: &{IP:localhost Port:51256 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/running-upgrade-603000/id_rsa Username:docker}
	I0408 10:49:33.328353    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 10:49:33.335037    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 10:49:33.341615    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0408 10:49:33.348638    8917 provision.go:87] duration metric: took 93.703792ms to configureAuth
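
configureAuth regenerates the Docker server certificate whenever the SAN list changes; here the cert covers 127.0.0.1, localhost, minikube, and the machine name. A compact sketch of issuing such a cert from an existing CA with Go's crypto/x509 (the organization string and one-year lifetime are illustrative, not minikube's actual values):

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate for the given SANs with an
    // existing CA cert/key, splitting the SAN list into IP and DNS entries
    // the same way san=[127.0.0.1 localhost minikube <machine>] is consumed.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, sans []string) (certPEM, keyPEM []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"minikube-sketch"}}, // illustrative
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour), // illustrative lifetime
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, s := range sans {
            if ip := net.ParseIP(s); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, s)
            }
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }
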
	I0408 10:49:33.348648    8917 buildroot.go:189] setting minikube options for container-runtime
	I0408 10:49:33.348755    8917 config.go:182] Loaded profile config "running-upgrade-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 10:49:33.348786    8917 main.go:141] libmachine: Using SSH client type: native
	I0408 10:49:33.348868    8917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d59c80] 0x104d5c4e0 <nil>  [] 0s} localhost 51256 <nil> <nil>}
	I0408 10:49:33.348877    8917 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 10:49:33.413278    8917 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 10:49:33.413287    8917 buildroot.go:70] root file system type: tmpfs
	I0408 10:49:33.413341    8917 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 10:49:33.413395    8917 main.go:141] libmachine: Using SSH client type: native
	I0408 10:49:33.413503    8917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d59c80] 0x104d5c4e0 <nil>  [] 0s} localhost 51256 <nil> <nil>}
	I0408 10:49:33.413538    8917 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 10:49:33.478900    8917 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 10:49:33.478951    8917 main.go:141] libmachine: Using SSH client type: native
	I0408 10:49:33.479048    8917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d59c80] 0x104d5c4e0 <nil>  [] 0s} localhost 51256 <nil> <nil>}
	I0408 10:49:33.479057    8917 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 10:49:33.543327    8917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 10:49:33.543339    8917 machine.go:97] duration metric: took 483.179875ms to provisionDockerMachine
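
The unit update above is deliberately idempotent: the rendered unit goes to docker.service.new, is diffed against the live file, and only on a difference is it moved into place and followed by daemon-reload, enable, and restart. A sketch of the same write-diff-swap pattern (helper name and return convention are hypothetical):

    package provision

    import (
        "bytes"
        "os"
    )

    // updateUnit replaces path with contents only when they differ, returning
    // true when the caller should daemon-reload and restart the service.
    func updateUnit(path string, contents []byte) (changed bool, err error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, contents) {
            return false, nil // live unit already matches; skip the restart
        }
        tmp := path + ".new"
        if err := os.WriteFile(tmp, contents, 0o644); err != nil {
            return false, err
        }
        // Swap into place, same as `mv docker.service.new docker.service`.
        return true, os.Rename(tmp, path)
    }
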
	I0408 10:49:33.543345    8917 start.go:293] postStartSetup for "running-upgrade-603000" (driver="qemu2")
	I0408 10:49:33.543352    8917 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 10:49:33.543404    8917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 10:49:33.543413    8917 sshutil.go:53] new ssh client: &{IP:localhost Port:51256 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/running-upgrade-603000/id_rsa Username:docker}
	I0408 10:49:33.577484    8917 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 10:49:33.578718    8917 info.go:137] Remote host: Buildroot 2021.02.12
	I0408 10:49:33.578730    8917 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18585-6624/.minikube/addons for local assets ...
	I0408 10:49:33.578796    8917 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18585-6624/.minikube/files for local assets ...
	I0408 10:49:33.578886    8917 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18585-6624/.minikube/files/etc/ssl/certs/70432.pem -> 70432.pem in /etc/ssl/certs
	I0408 10:49:33.579026    8917 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 10:49:33.581673    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/files/etc/ssl/certs/70432.pem --> /etc/ssl/certs/70432.pem (1708 bytes)
	I0408 10:49:33.588855    8917 start.go:296] duration metric: took 45.502959ms for postStartSetup
	I0408 10:49:33.588869    8917 fix.go:56] duration metric: took 541.047959ms for fixHost
	I0408 10:49:33.588904    8917 main.go:141] libmachine: Using SSH client type: native
	I0408 10:49:33.589012    8917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d59c80] 0x104d5c4e0 <nil>  [] 0s} localhost 51256 <nil> <nil>}
	I0408 10:49:33.589016    8917 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 10:49:33.656912    8917 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712598573.353648138
	
	I0408 10:49:33.656922    8917 fix.go:216] guest clock: 1712598573.353648138
	I0408 10:49:33.656926    8917 fix.go:229] Guest: 2024-04-08 10:49:33.353648138 -0700 PDT Remote: 2024-04-08 10:49:33.588871 -0700 PDT m=+0.655146834 (delta=-235.222862ms)
	I0408 10:49:33.656941    8917 fix.go:200] guest clock delta is within tolerance: -235.222862ms
	I0408 10:49:33.656944    8917 start.go:83] releasing machines lock for "running-upgrade-603000", held for 609.129583ms
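
The guest-clock check runs date +%s.%N inside the VM and compares it with the host clock; the -235ms delta here is inside tolerance, so no resync happens. A rough sketch of that comparison, with an assumed 2s tolerance:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns the
    // guest-minus-host skew. float64 loses sub-microsecond precision, which
    // is fine for a tolerance check.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        f, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(f*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        delta, err := clockDelta("1712598573.353648138", time.Now())
        if err != nil {
            panic(err)
        }
        if delta < -2*time.Second || delta > 2*time.Second {
            fmt.Println("guest clock outside tolerance, would resync:", delta)
        } else {
            fmt.Println("guest clock delta within tolerance:", delta)
        }
    }
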
	I0408 10:49:33.657015    8917 ssh_runner.go:195] Run: cat /version.json
	I0408 10:49:33.657027    8917 sshutil.go:53] new ssh client: &{IP:localhost Port:51256 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/running-upgrade-603000/id_rsa Username:docker}
	I0408 10:49:33.657037    8917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 10:49:33.657054    8917 sshutil.go:53] new ssh client: &{IP:localhost Port:51256 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/running-upgrade-603000/id_rsa Username:docker}
	W0408 10:49:33.657590    8917 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51365->127.0.0.1:51256: write: broken pipe
	I0408 10:49:33.657605    8917 retry.go:31] will retry after 127.576616ms: ssh: handshake failed: write tcp 127.0.0.1:51365->127.0.0.1:51256: write: broken pipe
	W0408 10:49:33.821825    8917 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0408 10:49:33.821908    8917 ssh_runner.go:195] Run: systemctl --version
	I0408 10:49:33.823841    8917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 10:49:33.825558    8917 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 10:49:33.825586    8917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0408 10:49:33.828432    8917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0408 10:49:33.832722    8917 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 10:49:33.832731    8917 start.go:494] detecting cgroup driver to use...
	I0408 10:49:33.832835    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 10:49:33.838229    8917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0408 10:49:33.841125    8917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 10:49:33.844604    8917 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 10:49:33.844624    8917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 10:49:33.847349    8917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 10:49:33.850124    8917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 10:49:33.853048    8917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 10:49:33.856257    8917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 10:49:33.859236    8917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 10:49:33.862059    8917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 10:49:33.865048    8917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 10:49:33.868604    8917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 10:49:33.871516    8917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 10:49:33.874045    8917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:49:33.966142    8917 ssh_runner.go:195] Run: sudo systemctl restart containerd
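
Each sed run above patches one key in /etc/containerd/config.toml (sandbox image, SystemdCgroup = false to match the chosen cgroupfs driver, the runc v2 runtime name, CNI conf_dir) before containerd is restarted. The same kind of single-key rewrite in Go, assuming the file layout those regexes target:

    package provision

    import (
        "os"
        "regexp"
    )

    // setSystemdCgroup flips the SystemdCgroup key in a containerd
    // config.toml, mirroring the `sed -i -r 's|^( *)SystemdCgroup = .*$|...|'`
    // step above while preserving indentation.
    func setSystemdCgroup(path string, enabled bool) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        val := "false"
        if enabled {
            val = "true"
        }
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = "+val))
        return os.WriteFile(path, out, 0o644)
    }
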
	I0408 10:49:33.977028    8917 start.go:494] detecting cgroup driver to use...
	I0408 10:49:33.977103    8917 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 10:49:33.983403    8917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 10:49:33.990756    8917 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 10:49:33.998460    8917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 10:49:34.002773    8917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 10:49:34.007233    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 10:49:34.012535    8917 ssh_runner.go:195] Run: which cri-dockerd
	I0408 10:49:34.013808    8917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 10:49:34.016380    8917 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0408 10:49:34.021181    8917 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 10:49:34.116351    8917 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 10:49:34.208078    8917 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 10:49:34.208137    8917 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0408 10:49:34.213234    8917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:49:34.305003    8917 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 10:49:37.713106    8917 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.408065625s)
	I0408 10:49:37.713179    8917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0408 10:49:37.720774    8917 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0408 10:49:37.730887    8917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 10:49:37.735884    8917 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0408 10:49:37.804873    8917 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0408 10:49:37.885753    8917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:49:37.970111    8917 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0408 10:49:37.976525    8917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 10:49:37.980888    8917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:49:38.067936    8917 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0408 10:49:38.112287    8917 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0408 10:49:38.112358    8917 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0408 10:49:38.114480    8917 start.go:562] Will wait 60s for crictl version
	I0408 10:49:38.114546    8917 ssh_runner.go:195] Run: which crictl
	I0408 10:49:38.116065    8917 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 10:49:38.128560    8917 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
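
The "Will wait 60s for socket path /var/run/cri-dockerd.sock" step is a plain poll: stat the socket until it exists or the deadline passes, then verify crictl answers. A sketch of such a wait loop (the 500ms interval is an assumption):

    package provision

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for path until it exists or timeout elapses,
    // e.g. waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second).
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil // socket is present
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
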
	I0408 10:49:38.128628    8917 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 10:49:38.142943    8917 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 10:49:38.163220    8917 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0408 10:49:38.163346    8917 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0408 10:49:38.164737    8917 kubeadm.go:877] updating cluster {Name:running-upgrade-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51288 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0408 10:49:38.164777    8917 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0408 10:49:38.164816    8917 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 10:49:38.179863    8917 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0408 10:49:38.179871    8917 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0408 10:49:38.179914    8917 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 10:49:38.182845    8917 ssh_runner.go:195] Run: which lz4
	I0408 10:49:38.184042    8917 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 10:49:38.185178    8917 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 10:49:38.185188    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0408 10:49:38.839396    8917 docker.go:649] duration metric: took 655.384125ms to copy over tarball
	I0408 10:49:38.839452    8917 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 10:49:39.935529    8917 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.096056458s)
	I0408 10:49:39.935544    8917 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 10:49:39.951004    8917 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 10:49:39.953734    8917 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0408 10:49:39.958793    8917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:49:40.041973    8917 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 10:49:41.779680    8917 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.737679333s)
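
The preload path is a check-then-copy sequence: stat /preloaded.tar.lz4 on the guest (absent here, hence the status-1 existence check), transfer the roughly 360 MB tarball, unpack it into /var with tar -I lz4, remove it, then rewrite repositories.json and restart Docker so the daemon sees the layers. A hedged sketch of that flow; the Runner interface and helper names are hypothetical, the commands mirror the log:

    package provision

    import "fmt"

    // Runner abstracts ssh_runner-style command execution on the guest.
    type Runner interface {
        Run(cmd string) error
        Copy(localPath, remotePath string) error
    }

    // ensurePreload copies and unpacks the preload tarball only when it is
    // missing on the guest.
    func ensurePreload(r Runner, local, remote string) error {
        if err := r.Run(fmt.Sprintf("stat -c '%%s %%y' %s", remote)); err != nil {
            // Not on the guest yet: transfer it, as the scp line above does.
            if err := r.Copy(local, remote); err != nil {
                return err
            }
        }
        if err := r.Run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote); err != nil {
            return err
        }
        return r.Run("rm " + remote)
    }
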
	I0408 10:49:41.779768    8917 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 10:49:41.806684    8917 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0408 10:49:41.806694    8917 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0408 10:49:41.806699    8917 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
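
Why does a freshly extracted preload still report that registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded? The listing above shows the tarball tags its images under k8s.gcr.io, while this minikube asks for registry.k8s.io names; the exact-name lookup misses, so every image is reloaded from the local file cache instead. A sketch of that check:

    package provision

    // needsTransfer reports whether a required image name is missing from the
    // runtime's `docker images` listing. The comparison is exact, which is
    // why the k8s.gcr.io tags unpacked from the old preload do not satisfy
    // the registry.k8s.io names requested here.
    func needsTransfer(required string, loaded []string) bool {
        for _, img := range loaded {
            if img == required {
                return false
            }
        }
        return true
    }
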
	I0408 10:49:41.814302    8917 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0408 10:49:41.814471    8917 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 10:49:41.814491    8917 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 10:49:41.814648    8917 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 10:49:41.814677    8917 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0408 10:49:41.814736    8917 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 10:49:41.815232    8917 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:49:41.816008    8917 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:49:41.824704    8917 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 10:49:41.824761    8917 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 10:49:41.824883    8917 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 10:49:41.824930    8917 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:49:41.825023    8917 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 10:49:41.825127    8917 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0408 10:49:41.825292    8917 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:49:41.825561    8917 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0408 10:49:42.207256    8917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 10:49:42.213932    8917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0408 10:49:42.228430    8917 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0408 10:49:42.228454    8917 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 10:49:42.228508    8917 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 10:49:42.236409    8917 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0408 10:49:42.236426    8917 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 10:49:42.236469    8917 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0408 10:49:42.243797    8917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0408 10:49:42.251939    8917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0408 10:49:42.253552    8917 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0408 10:49:42.253967    8917 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0408 10:49:42.255772    8917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0408 10:49:42.261759    8917 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0408 10:49:42.261781    8917 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 10:49:42.261836    8917 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	W0408 10:49:42.267125    8917 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0408 10:49:42.267262    8917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:49:42.268534    8917 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0408 10:49:42.268550    8917 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 10:49:42.268586    8917 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0408 10:49:42.277768    8917 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0408 10:49:42.277789    8917 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0408 10:49:42.277838    8917 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0408 10:49:42.283170    8917 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0408 10:49:42.284868    8917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0408 10:49:42.296092    8917 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0408 10:49:42.296113    8917 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:49:42.296166    8917 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:49:42.303188    8917 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0408 10:49:42.305443    8917 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0408 10:49:42.306406    8917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0408 10:49:42.308835    8917 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0408 10:49:42.308854    8917 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0408 10:49:42.308895    8917 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0408 10:49:42.318886    8917 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0408 10:49:42.319026    8917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0408 10:49:42.319200    8917 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0408 10:49:42.319214    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0408 10:49:42.328698    8917 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0408 10:49:42.328702    8917 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0408 10:49:42.328725    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0408 10:49:42.328799    8917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0408 10:49:42.342102    8917 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0408 10:49:42.342140    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0408 10:49:42.370066    8917 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0408 10:49:42.370081    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0408 10:49:42.469902    8917 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0408 10:49:42.469926    8917 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0408 10:49:42.469933    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0408 10:49:42.525365    8917 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0408 10:49:42.525479    8917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:49:42.556171    8917 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0408 10:49:42.556220    8917 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0408 10:49:42.556243    8917 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:49:42.556306    8917 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:49:42.613496    8917 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0408 10:49:42.613509    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0408 10:49:43.576140    8917 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.019789542s)
	I0408 10:49:43.576168    8917 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0408 10:49:43.576179    8917 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0408 10:49:43.576425    8917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0408 10:49:43.580302    8917 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0408 10:49:43.580335    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0408 10:49:43.636286    8917 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0408 10:49:43.636312    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0408 10:49:43.870293    8917 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0408 10:49:43.870334    8917 cache_images.go:92] duration metric: took 2.063614583s to LoadCachedImages
	W0408 10:49:43.870375    8917 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
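
Each cached image reaches the daemon by streaming its tarball through `sudo cat <file> | docker load`; the file sizes above (268 KB for pause up to about 81 MB for etcd) explain the relative load times. The Go equivalent of that pipe, using os/exec (a sketch, not minikube's loader):

    package provision

    import (
        "os"
        "os/exec"
    )

    // dockerLoad streams an image tarball into the Docker daemon, equivalent
    // to `sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load`.
    func dockerLoad(tarPath string) error {
        f, err := os.Open(tarPath)
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command("sudo", "docker", "load")
        cmd.Stdin = f
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }
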
	I0408 10:49:43.870385    8917 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0408 10:49:43.870445    8917 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-603000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 10:49:43.870506    8917 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0408 10:49:43.884821    8917 cni.go:84] Creating CNI manager for ""
	I0408 10:49:43.884832    8917 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:49:43.884840    8917 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 10:49:43.884849    8917 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-603000 NodeName:running-upgrade-603000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 10:49:43.884914    8917 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-603000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 10:49:43.884970    8917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0408 10:49:43.888351    8917 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 10:49:43.888392    8917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 10:49:43.891116    8917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0408 10:49:43.895972    8917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 10:49:43.901004    8917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
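
The kubeadm.yaml written above bundles four YAML documents in one stream: InitConfiguration (node registration and API endpoint), ClusterConfiguration (component extraArgs, cert SANs, subnets), KubeletConfiguration, and KubeProxyConfiguration; kubeadm consumes them as a single --config input. A quick self-contained check that splits such a file and lists each document's kind (line-based on purpose, to avoid a YAML dependency; sketch only):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Path from the log; the file holds four YAML documents in one stream.
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for i, doc := range strings.Split(string(data), "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
                }
            }
        }
    }
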
	I0408 10:49:43.906383    8917 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0408 10:49:43.907654    8917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:49:43.989362    8917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 10:49:43.995004    8917 certs.go:68] Setting up /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000 for IP: 10.0.2.15
	I0408 10:49:43.995010    8917 certs.go:194] generating shared ca certs ...
	I0408 10:49:43.995018    8917 certs.go:226] acquiring lock for ca certs: {Name:mkfcdee1cac51c6f74fa377d8d75e68d66123e1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:49:43.996481    8917 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.key
	I0408 10:49:43.996518    8917 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/proxy-client-ca.key
	I0408 10:49:43.996523    8917 certs.go:256] generating profile certs ...
	I0408 10:49:43.996582    8917 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/client.key
	I0408 10:49:43.996594    8917 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/apiserver.key.d639bbe4
	I0408 10:49:43.996607    8917 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/apiserver.crt.d639bbe4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0408 10:49:44.110946    8917 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/apiserver.crt.d639bbe4 ...
	I0408 10:49:44.110952    8917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/apiserver.crt.d639bbe4: {Name:mk44b7d996931a9438a4f1e2769711fc273bcdbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:49:44.111193    8917 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/apiserver.key.d639bbe4 ...
	I0408 10:49:44.111198    8917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/apiserver.key.d639bbe4: {Name:mk763a6501ef520463e31067584099831780222f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:49:44.111328    8917 certs.go:381] copying /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/apiserver.crt.d639bbe4 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/apiserver.crt
	I0408 10:49:44.111441    8917 certs.go:385] copying /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/apiserver.key.d639bbe4 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/apiserver.key
	I0408 10:49:44.111554    8917 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/proxy-client.key
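
The apiserver certificate is issued for [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]: loopback, the node IP, and notably 10.96.0.1, the first usable address of ServiceCIDR 10.96.0.0/12 and therefore the in-cluster IP of the kubernetes.default Service that clients dial. Deriving that first service IP from the CIDR:

    package main

    import (
        "fmt"
        "net"
    )

    // firstServiceIP returns the .1 address of a service CIDR, the IP the
    // kubernetes.default Service gets and the apiserver cert must cover.
    func firstServiceIP(cidr string) (net.IP, error) {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        ip := ipnet.IP.To4()
        out := make(net.IP, len(ip))
        copy(out, ip)
        out[3]++ // 10.96.0.0 -> 10.96.0.1
        return out, nil
    }

    func main() {
        ip, _ := firstServiceIP("10.96.0.0/12")
        fmt.Println(ip) // prints 10.96.0.1
    }
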
	I0408 10:49:44.111671    8917 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/7043.pem (1338 bytes)
	W0408 10:49:44.111694    8917 certs.go:480] ignoring /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/7043_empty.pem, impossibly tiny 0 bytes
	I0408 10:49:44.111699    8917 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 10:49:44.111716    8917 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem (1082 bytes)
	I0408 10:49:44.111733    8917 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem (1123 bytes)
	I0408 10:49:44.111750    8917 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/key.pem (1675 bytes)
	I0408 10:49:44.111785    8917 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/files/etc/ssl/certs/70432.pem (1708 bytes)
	I0408 10:49:44.112091    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 10:49:44.119399    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 10:49:44.126111    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 10:49:44.133119    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 10:49:44.140511    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 10:49:44.147361    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 10:49:44.153942    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 10:49:44.162080    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 10:49:44.169008    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/7043.pem --> /usr/share/ca-certificates/7043.pem (1338 bytes)
	I0408 10:49:44.176334    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/files/etc/ssl/certs/70432.pem --> /usr/share/ca-certificates/70432.pem (1708 bytes)
	I0408 10:49:44.183231    8917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 10:49:44.189978    8917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 10:49:44.194824    8917 ssh_runner.go:195] Run: openssl version
	I0408 10:49:44.196619    8917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7043.pem && ln -fs /usr/share/ca-certificates/7043.pem /etc/ssl/certs/7043.pem"
	I0408 10:49:44.200141    8917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7043.pem
	I0408 10:49:44.201583    8917 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 17:36 /usr/share/ca-certificates/7043.pem
	I0408 10:49:44.201604    8917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7043.pem
	I0408 10:49:44.203527    8917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7043.pem /etc/ssl/certs/51391683.0"
	I0408 10:49:44.206190    8917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70432.pem && ln -fs /usr/share/ca-certificates/70432.pem /etc/ssl/certs/70432.pem"
	I0408 10:49:44.209407    8917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70432.pem
	I0408 10:49:44.210888    8917 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 17:36 /usr/share/ca-certificates/70432.pem
	I0408 10:49:44.210909    8917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70432.pem
	I0408 10:49:44.212695    8917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/70432.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 10:49:44.216097    8917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 10:49:44.219187    8917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 10:49:44.220665    8917 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 17:49 /usr/share/ca-certificates/minikubeCA.pem
	I0408 10:49:44.220685    8917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 10:49:44.222587    8917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 10:49:44.225368    8917 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 10:49:44.227049    8917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 10:49:44.228776    8917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 10:49:44.230688    8917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 10:49:44.232403    8917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 10:49:44.234434    8917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 10:49:44.236161    8917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
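
Two OpenSSL idioms close out the cert setup: `openssl x509 -hash -noout` prints the subject-name hash behind the /etc/ssl/certs/<hash>.0 symlinks above (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL's directory lookup uses to find CAs, and `-checkend 86400` exits nonzero when a certificate expires within the next 24 hours. The expiry check reproduced with crypto/x509 (a sketch; the 24h window matches the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside
    // window, mirroring `openssl x509 -noout -in <path> -checkend 86400`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", expiring)
    }
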
	I0408 10:49:44.238025    8917 kubeadm.go:391] StartCluster: {Name:running-upgrade-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51288 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 10:49:44.238090    8917 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0408 10:49:44.248535    8917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 10:49:44.252154    8917 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 10:49:44.252160    8917 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 10:49:44.252163    8917 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 10:49:44.252186    8917 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 10:49:44.255878    8917 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 10:49:44.255915    8917 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-603000" does not appear in /Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:49:44.255934    8917 kubeconfig.go:62] /Users/jenkins/minikube-integration/18585-6624/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-603000" cluster setting kubeconfig missing "running-upgrade-603000" context setting]
	I0408 10:49:44.256104    8917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/kubeconfig: {Name:mk0efe9672745867be1d2d584884b1976098d9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:49:44.256698    8917 kapi.go:59] client config for running-upgrade-603000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/client.key", CAFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10604fa70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
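
Note: kubeconfig.go has found that both the cluster and the context entries for running-upgrade-603000 are missing and repairs the kubeconfig under a write lock. The repair is roughly equivalent to the following kubectl sketch (server address and cert paths taken from the client config above; this is an illustration, not minikube's actual code path):

    KC=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
    PROFILE=/Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000
    kubectl --kubeconfig "$KC" config set-cluster running-upgrade-603000 \
        --server=https://10.0.2.15:8443 \
        --certificate-authority=/Users/jenkins/minikube-integration/18585-6624/.minikube/ca.crt
    kubectl --kubeconfig "$KC" config set-credentials running-upgrade-603000 \
        --client-certificate="$PROFILE/client.crt" --client-key="$PROFILE/client.key"
    kubectl --kubeconfig "$KC" config set-context running-upgrade-603000 \
        --cluster=running-upgrade-603000 --user=running-upgrade-603000
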
	I0408 10:49:44.258538    8917 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 10:49:44.261536    8917 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-603000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
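
Note: the drift check is simply diff's exit status: a non-zero exit from diff -u means the freshly rendered kubeadm.yaml.new no longer matches what is on disk (here the criSocket gained its unix:// scheme and cgroupDriver changed from systemd to cgroupfs), so minikube reconfigures instead of reusing the old config:

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        # drift detected: stop kube-system containers, install the new yaml,
        # and replay the kubeadm init phases (shown below)
        echo "kubeadm config drift, reconfiguring" >&2
    fi
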
	I0408 10:49:44.261541    8917 kubeadm.go:1154] stopping kube-system containers ...
	I0408 10:49:44.261583    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0408 10:49:44.272915    8917 docker.go:483] Stopping containers: [a618d0335092 47f55eb76f59 ace76edac2a1 b7b23e0e0b62 80da0ca46341 51867dc58ea1 ee24f73d112b 627ccac08839 1df516cfd59e 4decebdff654 baf39442e92d 5788cd97c70d 65df13c6a0c5 0bbb8830229e]
	I0408 10:49:44.272980    8917 ssh_runner.go:195] Run: docker stop a618d0335092 47f55eb76f59 ace76edac2a1 b7b23e0e0b62 80da0ca46341 51867dc58ea1 ee24f73d112b 627ccac08839 1df516cfd59e 4decebdff654 baf39442e92d 5788cd97c70d 65df13c6a0c5 0bbb8830229e
	I0408 10:49:44.285080    8917 ssh_runner.go:195] Run: sudo systemctl stop kubelet
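
Note: the stop above is a two-step pattern: collect the IDs of every container whose name matches the kubelet's k8s_<container>_<pod>_(kube-system)_ naming scheme, stop them with a single docker stop, then stop the kubelet so nothing restarts them. As a shell sketch (same name filter, quoted for the shell):

    ids=$(docker ps -a --filter=name='k8s_.*_(kube-system)_' --format={{.ID}})
    [ -n "$ids" ] && docker stop $ids    # $ids deliberately unquoted: one argument per ID
    sudo systemctl stop kubelet
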
	I0408 10:49:44.372301    8917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 10:49:44.376849    8917 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Apr  8 17:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Apr  8 17:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Apr  8 17:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Apr  8 17:49 /etc/kubernetes/scheduler.conf
	
	I0408 10:49:44.376884    8917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/admin.conf
	I0408 10:49:44.380584    8917 kubeadm.go:162] "https://control-plane.minikube.internal:51288" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0408 10:49:44.380620    8917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 10:49:44.384317    8917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/kubelet.conf
	I0408 10:49:44.387533    8917 kubeadm.go:162] "https://control-plane.minikube.internal:51288" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0408 10:49:44.387562    8917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 10:49:44.390524    8917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/controller-manager.conf
	I0408 10:49:44.393560    8917 kubeadm.go:162] "https://control-plane.minikube.internal:51288" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0408 10:49:44.393589    8917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 10:49:44.396563    8917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/scheduler.conf
	I0408 10:49:44.399325    8917 kubeadm.go:162] "https://control-plane.minikube.internal:51288" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0408 10:49:44.399346    8917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
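
Note: the four grep/rm pairs above are one loop: if a kubeconfig-style file under /etc/kubernetes no longer references the expected control-plane endpoint, it is deleted so the kubeadm kubeconfig phase below can regenerate it. Condensed:

    ep=https://control-plane.minikube.internal:51288
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # grep exits 1 when the endpoint is absent; drop the stale file
        sudo grep -q "$ep" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
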
	I0408 10:49:44.401831    8917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 10:49:44.404908    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 10:49:44.426548    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 10:49:44.794155    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 10:49:45.003763    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 10:49:45.045601    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
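
Note: rather than a full kubeadm init, restartPrimaryControlPlane replays only the init phases it needs, in the order shown: certs, kubeconfig, kubelet-start, control-plane, and local etcd, all against the updated kubeadm.yaml and the versioned kubeadm binary:

    cfg=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        # $phase deliberately unquoted so "certs all" expands to two arguments
        sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase $phase --config "$cfg"
    done
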
	I0408 10:49:45.087470    8917 api_server.go:52] waiting for apiserver process to appear ...
	I0408 10:49:45.087544    8917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 10:49:45.589721    8917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 10:49:46.089641    8917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 10:49:46.093849    8917 api_server.go:72] duration metric: took 1.006374334s to wait for apiserver process to appear ...
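
Note: the ~1s "duration metric" is the pgrep poll loop: api_server.go re-runs the process check roughly every 500ms until kube-apiserver shows up in the process table. Equivalent shell:

    # -f: match against the full command line, -x: require a whole-line match, -n: newest PID
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
        sleep 0.5
    done
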
	I0408 10:49:46.093857    8917 api_server.go:88] waiting for apiserver healthz status ...
	I0408 10:49:46.093866    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:49:51.096023    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:49:51.096077    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:49:56.096530    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:49:56.096607    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:50:01.097611    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:50:01.097702    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:50:06.098684    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:50:06.098767    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:50:11.100265    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:50:11.100352    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:50:16.102154    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:50:16.102234    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:50:21.104570    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:50:21.104660    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:50:26.107311    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:50:26.107402    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:50:31.109313    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:50:31.109393    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:50:36.112058    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:50:36.112148    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:50:41.114755    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:50:41.114887    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:50:46.117540    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
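
Note: every healthz probe here is a GET against https://10.0.2.15:8443/healthz with a short client timeout; the repeated "context deadline exceeded" means nothing answered within the window at all (the apiserver never became reachable), not that the endpoint reported unhealthy. The probe is the shell equivalent of:

    # -k: the apiserver cert is not in the host trust store; --max-time bounds each attempt
    while ! curl -ks --max-time 5 https://10.0.2.15:8443/healthz; do
        sleep 3
    done
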
	I0408 10:50:46.118046    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:50:46.160288    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:50:46.160424    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:50:46.182033    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:50:46.182146    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:50:46.197359    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:50:46.197461    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:50:46.209741    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:50:46.209815    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:50:46.220688    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:50:46.220765    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:50:46.231033    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:50:46.231101    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:50:46.241086    8917 logs.go:276] 0 containers: []
	W0408 10:50:46.241098    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:50:46.241153    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:50:46.252982    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:50:46.253005    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:50:46.253014    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:50:46.269662    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:50:46.269674    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:50:46.281377    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:50:46.281392    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:50:46.293429    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:50:46.293441    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:50:46.364664    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:50:46.364677    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:50:46.389441    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:50:46.389454    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:50:46.403703    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:50:46.403714    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:50:46.415496    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:50:46.415509    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:50:46.433832    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:50:46.433846    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:50:46.445306    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:50:46.445320    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:50:46.469791    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:50:46.469802    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:50:46.483494    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:50:46.483504    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:50:46.506380    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:50:46.506391    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:50:46.523464    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:50:46.523474    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:50:46.542849    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:50:46.542859    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:50:46.582594    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:50:46.582605    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:50:46.587444    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:50:46.587455    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
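
Note: each of the diagnostic cycles that follow repeats the same recipe: enumerate the containers for every control-plane component via a k8s_<component> name filter, tail the last 400 log lines of each, then pull kubelet and Docker logs from journald plus a filtered dmesg excerpt. One cycle, condensed:

    for comp in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
        for id in $(docker ps -a --filter=name="k8s_$comp" --format={{.ID}}); do
            docker logs --tail 400 "$id"
        done
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
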
	I0408 10:50:49.108107    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:50:54.110847    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:50:54.111051    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:50:54.130274    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:50:54.130363    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:50:54.144547    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:50:54.144622    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:50:54.156713    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:50:54.156785    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:50:54.166892    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:50:54.166964    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:50:54.177157    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:50:54.177219    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:50:54.187783    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:50:54.187855    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:50:54.198112    8917 logs.go:276] 0 containers: []
	W0408 10:50:54.198124    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:50:54.198182    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:50:54.209321    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:50:54.209340    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:50:54.209345    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:50:54.232706    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:50:54.232718    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:50:54.243575    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:50:54.243585    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:50:54.256843    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:50:54.256855    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:50:54.281343    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:50:54.281352    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:50:54.294030    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:50:54.294041    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:50:54.332904    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:50:54.332912    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:50:54.350673    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:50:54.350686    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:50:54.368124    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:50:54.368136    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:50:54.379459    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:50:54.379471    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:50:54.390599    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:50:54.390610    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:50:54.425614    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:50:54.425624    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:50:54.437256    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:50:54.437267    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:50:54.441659    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:50:54.441666    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:50:54.455425    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:50:54.455433    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:50:54.469582    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:50:54.469593    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:50:54.480538    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:50:54.480548    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:50:56.999574    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:51:02.002488    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:51:02.002956    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:51:02.045497    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:51:02.045628    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:51:02.070600    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:51:02.070695    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:51:02.084270    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:51:02.084334    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:51:02.096127    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:51:02.096186    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:51:02.106557    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:51:02.106630    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:51:02.117040    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:51:02.117114    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:51:02.131385    8917 logs.go:276] 0 containers: []
	W0408 10:51:02.131398    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:51:02.131458    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:51:02.142379    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:51:02.142395    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:51:02.142400    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:51:02.153553    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:51:02.153564    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:51:02.169233    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:51:02.169242    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:51:02.207065    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:51:02.207078    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:51:02.224613    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:51:02.224624    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:51:02.250543    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:51:02.250554    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:51:02.264622    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:51:02.264633    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:51:02.276849    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:51:02.276862    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:51:02.296727    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:51:02.296739    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:51:02.308451    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:51:02.308461    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:51:02.320045    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:51:02.320054    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:51:02.341113    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:51:02.341126    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:51:02.352837    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:51:02.352848    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:51:02.367143    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:51:02.367154    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:51:02.381448    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:51:02.381457    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:51:02.393348    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:51:02.393359    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:51:02.397982    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:51:02.397991    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:51:04.937018    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:51:09.939515    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:51:09.939734    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:51:09.969116    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:51:09.969195    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:51:09.981562    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:51:09.981666    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:51:09.992714    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:51:09.992775    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:51:10.003306    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:51:10.003376    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:51:10.013155    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:51:10.013225    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:51:10.025963    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:51:10.026035    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:51:10.036138    8917 logs.go:276] 0 containers: []
	W0408 10:51:10.036156    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:51:10.036214    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:51:10.046134    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:51:10.046150    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:51:10.046166    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:51:10.057649    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:51:10.057662    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:51:10.075840    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:51:10.075852    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:51:10.112511    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:51:10.112518    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:51:10.128336    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:51:10.128350    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:51:10.142101    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:51:10.142114    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:51:10.146819    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:51:10.146825    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:51:10.170949    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:51:10.170959    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:51:10.186973    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:51:10.186985    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:51:10.204683    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:51:10.204695    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:51:10.215928    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:51:10.215940    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:51:10.227426    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:51:10.227437    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:51:10.238517    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:51:10.238529    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:51:10.277452    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:51:10.277461    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:51:10.291791    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:51:10.291806    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:51:10.306737    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:51:10.306747    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:51:10.325044    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:51:10.325056    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:51:12.852193    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:51:17.855056    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:51:17.855545    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:51:17.895596    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:51:17.895725    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:51:17.917228    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:51:17.917342    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:51:17.934229    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:51:17.934306    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:51:17.947559    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:51:17.947649    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:51:17.958322    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:51:17.961379    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:51:17.975025    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:51:17.975100    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:51:17.985480    8917 logs.go:276] 0 containers: []
	W0408 10:51:17.985492    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:51:17.985549    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:51:18.002067    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:51:18.002086    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:51:18.002090    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:51:18.040468    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:51:18.040476    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:51:18.062897    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:51:18.062912    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:51:18.077020    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:51:18.077037    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:51:18.088999    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:51:18.089017    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:51:18.101629    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:51:18.101641    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:51:18.112978    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:51:18.112992    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:51:18.124498    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:51:18.124507    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:51:18.149614    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:51:18.149622    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:51:18.153610    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:51:18.153619    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:51:18.167470    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:51:18.167484    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:51:18.184945    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:51:18.184957    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:51:18.196693    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:51:18.196707    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:51:18.214734    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:51:18.214747    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:51:18.226128    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:51:18.226138    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:51:18.237543    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:51:18.237555    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:51:18.273114    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:51:18.273125    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:51:20.790153    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:51:25.792898    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:51:25.793347    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:51:25.831270    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:51:25.831410    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:51:25.853108    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:51:25.853228    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:51:25.867938    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:51:25.868017    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:51:25.880393    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:51:25.880466    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:51:25.891248    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:51:25.891318    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:51:25.901748    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:51:25.901818    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:51:25.912118    8917 logs.go:276] 0 containers: []
	W0408 10:51:25.912132    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:51:25.912192    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:51:25.922864    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:51:25.922884    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:51:25.922889    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:51:25.935087    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:51:25.935100    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:51:25.974330    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:51:25.974338    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:51:26.008690    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:51:26.008702    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:51:26.022770    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:51:26.022780    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:51:26.034809    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:51:26.034822    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:51:26.060062    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:51:26.060070    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:51:26.064315    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:51:26.064323    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:51:26.082407    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:51:26.082416    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:51:26.101125    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:51:26.101138    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:51:26.113832    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:51:26.113842    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:51:26.128976    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:51:26.128987    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:51:26.140218    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:51:26.140227    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:51:26.151968    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:51:26.151981    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:51:26.176479    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:51:26.176492    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:51:26.187741    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:51:26.187755    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:51:26.199167    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:51:26.199179    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:51:28.718333    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:51:33.721050    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:51:33.721430    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:51:33.748971    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:51:33.749087    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:51:33.768107    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:51:33.768185    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:51:33.782011    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:51:33.782068    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:51:33.793548    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:51:33.793624    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:51:33.803533    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:51:33.803605    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:51:33.814381    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:51:33.814457    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:51:33.824556    8917 logs.go:276] 0 containers: []
	W0408 10:51:33.824567    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:51:33.824615    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:51:33.835691    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:51:33.835709    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:51:33.835713    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:51:33.840469    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:51:33.840478    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:51:33.857909    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:51:33.857919    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:51:33.869631    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:51:33.869643    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:51:33.880528    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:51:33.880539    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:51:33.919896    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:51:33.919911    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:51:33.933675    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:51:33.933686    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:51:33.945137    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:51:33.945149    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:51:33.970747    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:51:33.970755    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:51:33.988928    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:51:33.988938    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:51:34.000179    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:51:34.000190    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:51:34.016046    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:51:34.016057    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:51:34.051694    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:51:34.051706    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:51:34.072520    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:51:34.072529    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:51:34.096162    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:51:34.096175    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:51:34.107340    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:51:34.107353    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:51:34.121614    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:51:34.121628    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:51:36.635799    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:51:41.638633    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:51:41.639075    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:51:41.680662    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:51:41.680807    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:51:41.702323    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:51:41.702428    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:51:41.717652    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:51:41.717725    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:51:41.734755    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:51:41.734832    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:51:41.747006    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:51:41.747072    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:51:41.757780    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:51:41.757843    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:51:41.773170    8917 logs.go:276] 0 containers: []
	W0408 10:51:41.773183    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:51:41.773238    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:51:41.784549    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:51:41.784573    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:51:41.784579    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:51:41.789343    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:51:41.789349    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:51:41.807134    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:51:41.807143    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:51:41.821417    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:51:41.821429    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:51:41.834290    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:51:41.834305    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:51:41.849183    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:51:41.849194    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:51:41.861569    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:51:41.861580    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:51:41.873537    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:51:41.873548    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:51:41.910966    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:51:41.910974    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:51:41.948740    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:51:41.948754    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:51:41.973778    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:51:41.973787    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:51:41.990942    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:51:41.990952    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:51:42.015839    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:51:42.015846    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:51:42.028288    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:51:42.028300    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:51:42.042853    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:51:42.042864    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:51:42.053906    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:51:42.053918    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:51:42.065379    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:51:42.065393    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
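The cycle that ends here shows the collection pattern the runner repeats throughout this log: for each control-plane component it lists matching containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tails the last 400 lines of each container's logs. A minimal sketch of that pattern — not minikube's actual implementation; the function and variable names are illustrative, while the `k8s_` name prefix and `--tail 400` value are taken from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or not)
// whose name matches k8s_<component>, mirroring:
//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors: docker logs --tail 400 <id>
// CombinedOutput is used because container stderr arrives on stderr.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, component := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	} {
		ids, err := listContainers(component)
		if err != nil || len(ids) == 0 {
			// Matches the warning seen above for "kindnet".
			fmt.Printf("no container matching %q\n", component)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("=== %s [%s] ===\n%s", component, id, logs)
		}
	}
}
```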
	I0408 10:51:44.578897    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:51:49.581712    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
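The two lines above are the probe that drives each collection cycle: a GET against https://10.0.2.15:8443/healthz that fails with "Client.Timeout exceeded" roughly five seconds after it starts. A minimal sketch of such a poll loop, under stated assumptions: the URL, the ~5s timeout, and the ~2.5s spacing between attempts are read off the log timestamps, and skipping TLS verification is an assumption made only so the example runs against a self-signed apiserver certificate.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		// The "stopped:" line appears ~5s after each check starts,
		// consistent with a 5-second client timeout.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "Client.Timeout exceeded", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	// The log shows ~2.5s between the end of one collection pass and
	// the next probe; poll until the endpoint answers.
	for {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("apiserver not ready:", err)
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
```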
	I0408 10:51:49.582120    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:51:49.615267    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:51:49.615416    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:51:49.634808    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:51:49.634930    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:51:49.649834    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:51:49.649910    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:51:49.661665    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:51:49.661740    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:51:49.678834    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:51:49.678898    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:51:49.689073    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:51:49.689131    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:51:49.699173    8917 logs.go:276] 0 containers: []
	W0408 10:51:49.699184    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:51:49.699245    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:51:49.709486    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:51:49.709505    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:51:49.709510    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:51:49.751428    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:51:49.751435    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:51:49.755411    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:51:49.755417    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:51:49.768882    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:51:49.768895    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:51:49.786224    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:51:49.786233    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:51:49.797741    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:51:49.797750    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:51:49.809062    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:51:49.809074    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:51:49.844635    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:51:49.844647    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:51:49.859144    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:51:49.859153    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:51:49.870031    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:51:49.870040    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:51:49.884598    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:51:49.884607    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:51:49.895668    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:51:49.895683    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:51:49.921049    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:51:49.921055    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:51:49.943168    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:51:49.943180    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:51:49.955159    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:51:49.955169    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:51:49.972046    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:51:49.972058    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:51:49.983550    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:51:49.983560    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:51:52.496982    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:51:57.499845    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:51:57.500062    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:51:57.529371    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:51:57.529496    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:51:57.549008    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:51:57.549081    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:51:57.564448    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:51:57.564519    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:51:57.577102    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:51:57.577180    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:51:57.589144    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:51:57.589201    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:51:57.599593    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:51:57.599650    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:51:57.609763    8917 logs.go:276] 0 containers: []
	W0408 10:51:57.609773    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:51:57.609827    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:51:57.620498    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:51:57.620517    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:51:57.620523    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:51:57.634503    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:51:57.634517    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:51:57.651839    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:51:57.651851    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:51:57.672842    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:51:57.672854    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:51:57.698074    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:51:57.698087    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:51:57.709812    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:51:57.709826    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:51:57.746966    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:51:57.746975    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:51:57.750928    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:51:57.750934    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:51:57.761427    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:51:57.761438    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:51:57.778094    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:51:57.778105    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:51:57.789429    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:51:57.789440    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:51:57.800726    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:51:57.800737    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:51:57.826314    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:51:57.826320    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:51:57.861340    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:51:57.861353    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:51:57.876088    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:51:57.876100    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:51:57.887830    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:51:57.887840    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:51:57.899619    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:51:57.899630    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:52:00.412631    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:52:05.414631    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:52:05.414873    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:52:05.432573    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:52:05.432661    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:52:05.446106    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:52:05.446183    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:52:05.458315    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:52:05.458391    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:52:05.470276    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:52:05.470355    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:52:05.480642    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:52:05.480706    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:52:05.491300    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:52:05.491389    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:52:05.501829    8917 logs.go:276] 0 containers: []
	W0408 10:52:05.501839    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:52:05.501893    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:52:05.522041    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:52:05.522062    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:52:05.522067    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:52:05.556897    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:52:05.556912    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:52:05.593850    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:52:05.593857    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:52:05.605293    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:52:05.605304    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:52:05.616902    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:52:05.616913    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:52:05.640005    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:52:05.640015    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:52:05.657624    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:52:05.657638    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:52:05.674845    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:52:05.674854    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:52:05.686428    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:52:05.686442    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:52:05.712042    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:52:05.712049    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:52:05.724214    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:52:05.724227    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:52:05.728450    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:52:05.728457    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:52:05.742177    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:52:05.742186    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:52:05.753485    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:52:05.753497    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:52:05.768424    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:52:05.768432    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:52:05.780363    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:52:05.780376    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:52:05.791560    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:52:05.791570    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:52:08.307428    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:52:13.309495    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:52:13.309612    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:52:13.320839    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:52:13.320912    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:52:13.331727    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:52:13.331794    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:52:13.345369    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:52:13.345437    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:52:13.361630    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:52:13.361704    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:52:13.379075    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:52:13.379151    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:52:13.390546    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:52:13.390614    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:52:13.400839    8917 logs.go:276] 0 containers: []
	W0408 10:52:13.400853    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:52:13.400909    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:52:13.412352    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:52:13.412370    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:52:13.412376    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:52:13.425907    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:52:13.425917    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:52:13.462302    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:52:13.462313    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:52:13.476579    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:52:13.476593    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:52:13.501140    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:52:13.501165    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:52:13.521436    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:52:13.521456    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:52:13.535406    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:52:13.535420    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:52:13.548322    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:52:13.548336    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:52:13.562274    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:52:13.562289    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:52:13.568019    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:52:13.568035    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:52:13.593708    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:52:13.593729    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:52:13.608170    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:52:13.608183    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:52:13.621428    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:52:13.621440    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:52:13.664347    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:52:13.664370    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:52:13.677399    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:52:13.677413    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:52:13.694578    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:52:13.694591    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:52:13.715440    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:52:13.715454    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:52:16.246273    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:52:21.248765    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:52:21.249114    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:52:21.285999    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:52:21.286111    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:52:21.302789    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:52:21.302880    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:52:21.316424    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:52:21.316506    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:52:21.329377    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:52:21.329448    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:52:21.340217    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:52:21.340295    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:52:21.350866    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:52:21.350931    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:52:21.360692    8917 logs.go:276] 0 containers: []
	W0408 10:52:21.360705    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:52:21.360770    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:52:21.374751    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:52:21.374771    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:52:21.374777    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:52:21.397525    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:52:21.397534    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:52:21.412212    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:52:21.412225    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:52:21.427552    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:52:21.427565    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:52:21.451255    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:52:21.451262    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:52:21.485991    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:52:21.486000    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:52:21.500189    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:52:21.500199    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:52:21.537270    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:52:21.537277    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:52:21.552314    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:52:21.552326    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:52:21.568350    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:52:21.568360    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:52:21.579467    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:52:21.579482    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:52:21.591810    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:52:21.591828    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:52:21.609555    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:52:21.609567    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:52:21.627131    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:52:21.627142    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:52:21.639712    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:52:21.639723    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:52:21.657154    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:52:21.657165    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:52:21.668385    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:52:21.668401    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:52:24.173208    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:52:29.175361    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:52:29.175453    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:52:29.187239    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:52:29.187321    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:52:29.198924    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:52:29.199008    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:52:29.211755    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:52:29.211842    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:52:29.223164    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:52:29.223247    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:52:29.235254    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:52:29.235338    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:52:29.249510    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:52:29.249603    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:52:29.261347    8917 logs.go:276] 0 containers: []
	W0408 10:52:29.261359    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:52:29.261421    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:52:29.272545    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:52:29.272565    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:52:29.272571    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:52:29.285339    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:52:29.285350    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:52:29.297976    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:52:29.297988    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:52:29.310772    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:52:29.310785    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:52:29.323371    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:52:29.323384    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:52:29.348389    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:52:29.348407    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:52:29.365536    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:52:29.365552    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:52:29.397149    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:52:29.397162    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:52:29.427897    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:52:29.427911    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:52:29.452977    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:52:29.452997    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:52:29.490801    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:52:29.490812    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:52:29.505745    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:52:29.505756    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:52:29.523301    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:52:29.523312    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:52:29.562175    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:52:29.562184    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:52:29.567054    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:52:29.567062    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:52:29.581973    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:52:29.581985    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:52:29.593895    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:52:29.593904    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:52:32.107535    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:52:37.109935    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:52:37.110391    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:52:37.156035    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:52:37.156173    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:52:37.175088    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:52:37.175172    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:52:37.188676    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:52:37.188752    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:52:37.200844    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:52:37.200922    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:52:37.211669    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:52:37.211743    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:52:37.223322    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:52:37.223392    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:52:37.233303    8917 logs.go:276] 0 containers: []
	W0408 10:52:37.233314    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:52:37.233373    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:52:37.252582    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:52:37.252604    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:52:37.252609    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:52:37.267838    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:52:37.267850    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:52:37.279187    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:52:37.279198    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:52:37.294073    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:52:37.294087    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:52:37.305548    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:52:37.305558    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:52:37.343368    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:52:37.343377    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:52:37.383171    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:52:37.383186    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:52:37.403744    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:52:37.403758    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:52:37.416424    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:52:37.416438    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:52:37.429569    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:52:37.429581    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:52:37.443855    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:52:37.443868    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:52:37.459218    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:52:37.459233    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:52:37.472629    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:52:37.472641    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:52:37.498628    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:52:37.498647    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:52:37.503590    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:52:37.503601    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:52:37.530027    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:52:37.530045    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:52:37.549122    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:52:37.549136    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:52:40.063511    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:52:45.063911    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:52:45.064078    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:52:45.085193    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:52:45.085285    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:52:45.101536    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:52:45.101608    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:52:45.113031    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:52:45.113107    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:52:45.125169    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:52:45.125243    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:52:45.136092    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:52:45.136166    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:52:45.146581    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:52:45.146646    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:52:45.156627    8917 logs.go:276] 0 containers: []
	W0408 10:52:45.156637    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:52:45.156693    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:52:45.172132    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:52:45.172149    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:52:45.172155    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:52:45.176575    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:52:45.176585    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:52:45.210858    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:52:45.210869    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:52:45.222639    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:52:45.222651    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:52:45.238708    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:52:45.238717    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:52:45.262002    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:52:45.262013    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:52:45.301402    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:52:45.301412    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:52:45.313079    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:52:45.313090    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:52:45.324711    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:52:45.324723    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:52:45.343653    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:52:45.343664    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:52:45.357599    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:52:45.357613    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:52:45.379938    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:52:45.379952    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:52:45.395393    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:52:45.395403    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:52:45.407182    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:52:45.407193    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:52:45.425905    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:52:45.425919    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:52:45.443820    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:52:45.443831    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:52:45.455746    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:52:45.455757    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:52:47.969006    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:52:52.971677    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:52:52.971787    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:52:52.982920    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:52:52.982995    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:52:52.993819    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:52:52.993890    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:52:53.005686    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:52:53.005767    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:52:53.016856    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:52:53.016924    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:52:53.027592    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:52:53.027656    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:52:53.038691    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:52:53.038760    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:52:53.054203    8917 logs.go:276] 0 containers: []
	W0408 10:52:53.054215    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:52:53.054273    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:52:53.065502    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:52:53.065524    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:52:53.065529    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:52:53.077926    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:52:53.077938    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:52:53.090271    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:52:53.090284    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:52:53.129370    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:52:53.129382    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:52:53.145101    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:52:53.145115    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:52:53.163583    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:52:53.163621    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:52:53.175731    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:52:53.175744    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:52:53.203185    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:52:53.203202    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:52:53.218682    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:52:53.218694    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:52:53.242641    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:52:53.242664    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:52:53.258088    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:52:53.258100    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:52:53.262884    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:52:53.262893    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:52:53.284863    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:52:53.284874    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:52:53.299090    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:52:53.299100    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:52:53.310954    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:52:53.310964    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:52:53.322988    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:52:53.323001    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:52:53.361054    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:52:53.361062    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:52:55.876739    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:00.879153    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:00.879289    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:53:00.899668    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:53:00.899743    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:53:00.910647    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:53:00.910711    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:53:00.921419    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:53:00.921492    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:53:00.931954    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:53:00.932024    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:53:00.943378    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:53:00.943446    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:53:00.954370    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:53:00.954437    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:53:00.965330    8917 logs.go:276] 0 containers: []
	W0408 10:53:00.965342    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:53:00.965403    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:53:00.975828    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:53:00.975848    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:53:00.975856    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:53:00.999155    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:53:00.999165    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:53:01.013262    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:53:01.013276    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:53:01.024865    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:53:01.024876    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:53:01.036364    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:53:01.036375    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:53:01.048832    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:53:01.048843    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:53:01.087677    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:53:01.087684    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:53:01.122436    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:53:01.122449    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:53:01.134416    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:53:01.134427    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:53:01.147549    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:53:01.147559    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:53:01.161670    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:53:01.161684    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:53:01.174789    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:53:01.174800    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:53:01.189607    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:53:01.189620    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:53:01.194339    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:53:01.194346    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:53:01.212691    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:53:01.212701    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:53:01.238285    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:53:01.238300    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:53:01.260832    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:53:01.260850    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:53:03.782408    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:08.784764    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:08.785209    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:53:08.823679    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:53:08.823827    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:53:08.846372    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:53:08.846498    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:53:08.861900    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:53:08.861980    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:53:08.874927    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:53:08.875008    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:53:08.891298    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:53:08.891373    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:53:08.906906    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:53:08.906973    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:53:08.917939    8917 logs.go:276] 0 containers: []
	W0408 10:53:08.917957    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:53:08.918021    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:53:08.935419    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:53:08.935439    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:53:08.935445    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:53:08.967964    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:53:08.967981    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:53:08.987668    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:53:08.987680    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:53:09.002639    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:53:09.002649    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:53:09.020269    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:53:09.020278    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:53:09.031209    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:53:09.031220    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:53:09.070108    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:53:09.070114    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:53:09.083782    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:53:09.083793    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:53:09.095699    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:53:09.095710    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:53:09.106600    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:53:09.106615    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:53:09.120883    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:53:09.120895    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:53:09.135004    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:53:09.135016    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:53:09.169262    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:53:09.169274    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:53:09.192614    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:53:09.192626    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:53:09.204428    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:53:09.204442    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:53:09.219724    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:53:09.219736    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:53:09.242498    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:53:09.242509    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:53:11.750558    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:16.751262    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:16.751372    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:53:16.763835    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:53:16.763914    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:53:16.775954    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:53:16.776031    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:53:16.787841    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:53:16.787913    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:53:16.799786    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:53:16.799862    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:53:16.811487    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:53:16.811563    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:53:16.823633    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:53:16.823712    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:53:16.835572    8917 logs.go:276] 0 containers: []
	W0408 10:53:16.835583    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:53:16.835650    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:53:16.847706    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:53:16.847727    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:53:16.847733    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:53:16.861167    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:53:16.861181    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:53:16.908608    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:53:16.908624    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:53:16.923781    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:53:16.923795    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:53:16.943318    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:53:16.943332    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:53:16.956201    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:53:16.956215    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:53:16.974243    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:53:16.974257    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:53:16.986959    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:53:16.986974    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:53:17.029663    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:53:17.029680    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:53:17.034836    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:53:17.034848    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:53:17.051156    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:53:17.051202    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:53:17.075884    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:53:17.075898    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:53:17.089233    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:53:17.089246    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:53:17.115153    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:53:17.115173    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:53:17.134325    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:53:17.134341    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:53:17.151099    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:53:17.151111    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:53:17.170372    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:53:17.170384    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:53:19.690152    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:24.692463    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:24.692770    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:53:24.722308    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:53:24.722442    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:53:24.741782    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:53:24.741875    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:53:24.755492    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:53:24.755574    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:53:24.769812    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:53:24.769881    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:53:24.784590    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:53:24.784652    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:53:24.799405    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:53:24.799479    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:53:24.814749    8917 logs.go:276] 0 containers: []
	W0408 10:53:24.814762    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:53:24.814829    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:53:24.829521    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:53:24.829554    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:53:24.829562    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:53:24.868479    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:53:24.868490    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:53:24.904125    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:53:24.904139    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:53:24.927469    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:53:24.927479    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:53:24.938655    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:53:24.938664    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:53:24.952792    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:53:24.952803    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:53:24.970410    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:53:24.970424    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:53:24.984754    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:53:24.984769    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:53:24.998533    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:53:24.998546    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:53:25.018288    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:53:25.018300    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:53:25.033027    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:53:25.033041    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:53:25.052559    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:53:25.052573    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:53:25.065375    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:53:25.065389    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:53:25.090642    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:53:25.090659    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:53:25.095709    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:53:25.095721    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:53:25.108690    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:53:25.108703    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:53:25.121341    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:53:25.121353    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:53:27.635548    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:32.638194    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:32.638364    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:53:32.654265    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:53:32.654336    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:53:32.665869    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:53:32.665943    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:53:32.676691    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:53:32.676754    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:53:32.687276    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:53:32.687348    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:53:32.697736    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:53:32.697809    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:53:32.708110    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:53:32.708175    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:53:32.720950    8917 logs.go:276] 0 containers: []
	W0408 10:53:32.720962    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:53:32.721020    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:53:32.735765    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:53:32.735783    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:53:32.735788    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:53:32.774990    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:53:32.775000    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:53:32.779889    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:53:32.779897    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:53:32.793627    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:53:32.793638    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:53:32.809940    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:53:32.809950    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:53:32.822265    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:53:32.822276    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:53:32.839938    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:53:32.839949    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:53:32.851091    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:53:32.851107    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:53:32.862533    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:53:32.862544    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:53:32.884489    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:53:32.884507    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:53:32.896171    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:53:32.896184    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:53:32.913608    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:53:32.913619    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:53:32.925160    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:53:32.925171    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:53:32.960430    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:53:32.962011    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:53:32.976521    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:53:32.976534    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:53:33.000986    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:53:33.000998    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:53:33.014728    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:53:33.014740    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:53:35.532844    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:40.535108    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:40.535219    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:53:40.546884    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:53:40.546962    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:53:40.567370    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:53:40.567440    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:53:40.582484    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:53:40.582578    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:53:40.593220    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:53:40.593293    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:53:40.603951    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:53:40.604021    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:53:40.615376    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:53:40.615450    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:53:40.626399    8917 logs.go:276] 0 containers: []
	W0408 10:53:40.626411    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:53:40.626474    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:53:40.638045    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:53:40.638064    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:53:40.638070    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:53:40.643136    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:53:40.643148    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:53:40.654876    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:53:40.654891    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:53:40.695334    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:53:40.695352    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:53:40.715114    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:53:40.715126    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:53:40.738680    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:53:40.738692    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:53:40.751048    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:53:40.751059    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:53:40.775178    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:53:40.775195    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:53:40.790410    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:53:40.790423    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:53:40.801955    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:53:40.801966    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:53:40.816907    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:53:40.816918    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:53:40.828823    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:53:40.828839    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:53:40.841345    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:53:40.841357    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:53:40.853156    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:53:40.853169    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:53:40.892586    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:53:40.892598    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:53:40.915862    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:53:40.915873    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:53:40.933110    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:53:40.933121    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:53:43.446683    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:48.449002    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:48.449063    8917 kubeadm.go:591] duration metric: took 4m4.196734834s to restartPrimaryControlPlane
	W0408 10:53:48.449121    8917 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 10:53:48.449151    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0408 10:53:49.434095    8917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 10:53:49.439180    8917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 10:53:49.441974    8917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 10:53:49.444798    8917 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 10:53:49.444803    8917 kubeadm.go:156] found existing configuration files:
	
	I0408 10:53:49.444825    8917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/admin.conf
	I0408 10:53:49.447747    8917 kubeadm.go:162] "https://control-plane.minikube.internal:51288" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 10:53:49.447773    8917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 10:53:49.450272    8917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/kubelet.conf
	I0408 10:53:49.453293    8917 kubeadm.go:162] "https://control-plane.minikube.internal:51288" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 10:53:49.453318    8917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 10:53:49.456382    8917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/controller-manager.conf
	I0408 10:53:49.459040    8917 kubeadm.go:162] "https://control-plane.minikube.internal:51288" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 10:53:49.459061    8917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 10:53:49.461768    8917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/scheduler.conf
	I0408 10:53:49.464758    8917 kubeadm.go:162] "https://control-plane.minikube.internal:51288" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 10:53:49.464779    8917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
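The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so that kubeadm init can regenerate it. A condensed sketch of the same logic, using the endpoint string from this log:

    # Drop kubeconfigs that are missing or don't reference the expected endpoint;
    # grep exits non-zero in both cases (here all four files are absent, status 2).
    endpoint="https://control-plane.minikube.internal:51288"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done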
	I0408 10:53:49.467626    8917 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 10:53:49.486204    8917 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0408 10:53:49.486269    8917 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 10:53:49.536043    8917 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 10:53:49.536104    8917 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 10:53:49.536173    8917 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 10:53:49.585483    8917 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 10:53:49.589688    8917 out.go:204]   - Generating certificates and keys ...
	I0408 10:53:49.589721    8917 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 10:53:49.589750    8917 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 10:53:49.589794    8917 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 10:53:49.589838    8917 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 10:53:49.589876    8917 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 10:53:49.589911    8917 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 10:53:49.589943    8917 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 10:53:49.589980    8917 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 10:53:49.590031    8917 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 10:53:49.590069    8917 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 10:53:49.590086    8917 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 10:53:49.590112    8917 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 10:53:49.614914    8917 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 10:53:49.759706    8917 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 10:53:49.845325    8917 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 10:53:49.923793    8917 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 10:53:49.955317    8917 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 10:53:49.955624    8917 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 10:53:49.955649    8917 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 10:53:50.037891    8917 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 10:53:50.042015    8917 out.go:204]   - Booting up control plane ...
	I0408 10:53:50.042159    8917 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 10:53:50.042199    8917 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 10:53:50.043278    8917 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 10:53:50.043655    8917 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 10:53:50.044606    8917 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 10:53:54.546607    8917 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501536 seconds
	I0408 10:53:54.546673    8917 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 10:53:54.550036    8917 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 10:53:55.065727    8917 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 10:53:55.066012    8917 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-603000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 10:53:55.570721    8917 kubeadm.go:309] [bootstrap-token] Using token: lglln4.ya05jpnnv5na7a65
	I0408 10:53:55.576359    8917 out.go:204]   - Configuring RBAC rules ...
	I0408 10:53:55.576422    8917 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 10:53:55.576481    8917 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 10:53:55.580905    8917 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 10:53:55.581777    8917 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 10:53:55.582521    8917 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 10:53:55.583418    8917 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 10:53:55.586474    8917 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 10:53:55.770007    8917 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 10:53:55.979154    8917 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 10:53:55.979635    8917 kubeadm.go:309] 
	I0408 10:53:55.979665    8917 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 10:53:55.979669    8917 kubeadm.go:309] 
	I0408 10:53:55.979705    8917 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 10:53:55.979708    8917 kubeadm.go:309] 
	I0408 10:53:55.979721    8917 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 10:53:55.979749    8917 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 10:53:55.979784    8917 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 10:53:55.979789    8917 kubeadm.go:309] 
	I0408 10:53:55.979814    8917 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 10:53:55.979817    8917 kubeadm.go:309] 
	I0408 10:53:55.979844    8917 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 10:53:55.979848    8917 kubeadm.go:309] 
	I0408 10:53:55.979873    8917 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 10:53:55.979917    8917 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 10:53:55.979961    8917 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 10:53:55.979965    8917 kubeadm.go:309] 
	I0408 10:53:55.980011    8917 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 10:53:55.980097    8917 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 10:53:55.980102    8917 kubeadm.go:309] 
	I0408 10:53:55.980153    8917 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token lglln4.ya05jpnnv5na7a65 \
	I0408 10:53:55.980258    8917 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d8c71b19ee4cc7e59001527410e809db8edc3975b4a5e7aa364a2679c02ff296 \
	I0408 10:53:55.980269    8917 kubeadm.go:309] 	--control-plane 
	I0408 10:53:55.980272    8917 kubeadm.go:309] 
	I0408 10:53:55.980322    8917 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 10:53:55.980327    8917 kubeadm.go:309] 
	I0408 10:53:55.980366    8917 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token lglln4.ya05jpnnv5na7a65 \
	I0408 10:53:55.980423    8917 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d8c71b19ee4cc7e59001527410e809db8edc3975b4a5e7aa364a2679c02ff296 
	I0408 10:53:55.980489    8917 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 10:53:55.980496    8917 cni.go:84] Creating CNI manager for ""
	I0408 10:53:55.980503    8917 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:53:55.987748    8917 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 10:53:55.991767    8917 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 10:53:55.995221    8917 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
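The 496-byte conflist payload itself is not reproduced in the log. A bridge conflist of the general shape minikube writes for the docker runtime is sketched below; the subnet and the other field values are illustrative assumptions, not the bytes from this run:

    # Hypothetical bridge CNI config; values are assumptions, not the real payload.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF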
	I0408 10:53:56.000678    8917 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 10:53:56.000735    8917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 10:53:56.000736    8917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-603000 minikube.k8s.io/updated_at=2024_04_08T10_53_56_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f9de8f0b190a4305b11b3a925ec3e499cf3fc021 minikube.k8s.io/name=running-upgrade-603000 minikube.k8s.io/primary=true
	I0408 10:53:56.035454    8917 kubeadm.go:1107] duration metric: took 34.768084ms to wait for elevateKubeSystemPrivileges
	I0408 10:53:56.045083    8917 ops.go:34] apiserver oom_adj: -16
	W0408 10:53:56.045108    8917 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 10:53:56.045118    8917 kubeadm.go:393] duration metric: took 4m11.806888917s to StartCluster
	I0408 10:53:56.045127    8917 settings.go:142] acquiring lock: {Name:mk6ed0f877152c89dfeb4cfbed60423b324ecbe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:53:56.045294    8917 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:53:56.045730    8917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/kubeconfig: {Name:mk0efe9672745867be1d2d584884b1976098d9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:53:56.045922    8917 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:53:56.050680    8917 out.go:177] * Verifying Kubernetes components...
	I0408 10:53:56.045947    8917 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 10:53:56.046113    8917 config.go:182] Loaded profile config "running-upgrade-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 10:53:56.057723    8917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:53:56.057744    8917 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-603000"
	I0408 10:53:56.057763    8917 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-603000"
	W0408 10:53:56.057770    8917 addons.go:243] addon storage-provisioner should already be in state true
	I0408 10:53:56.057787    8917 host.go:66] Checking if "running-upgrade-603000" exists ...
	I0408 10:53:56.057745    8917 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-603000"
	I0408 10:53:56.057800    8917 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-603000"
	I0408 10:53:56.058905    8917 kapi.go:59] client config for running-upgrade-603000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/client.key", CAFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10604fa70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 10:53:56.059250    8917 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-603000"
	W0408 10:53:56.059255    8917 addons.go:243] addon default-storageclass should already be in state true
	I0408 10:53:56.059261    8917 host.go:66] Checking if "running-upgrade-603000" exists ...
	I0408 10:53:56.063680    8917 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:53:56.066784    8917 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 10:53:56.066790    8917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 10:53:56.066796    8917 sshutil.go:53] new ssh client: &{IP:localhost Port:51256 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/running-upgrade-603000/id_rsa Username:docker}
	I0408 10:53:56.067608    8917 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 10:53:56.067611    8917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 10:53:56.067615    8917 sshutil.go:53] new ssh client: &{IP:localhost Port:51256 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/running-upgrade-603000/id_rsa Username:docker}
	I0408 10:53:56.156097    8917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 10:53:56.161096    8917 api_server.go:52] waiting for apiserver process to appear ...
	I0408 10:53:56.161134    8917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 10:53:56.164902    8917 api_server.go:72] duration metric: took 118.96625ms to wait for apiserver process to appear ...
	I0408 10:53:56.164910    8917 api_server.go:88] waiting for apiserver healthz status ...
	I0408 10:53:56.164918    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:56.218021    8917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 10:53:56.219158    8917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 10:54:01.167112    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:01.167144    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:06.167502    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:06.167549    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:11.167970    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:11.168006    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:16.168524    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:16.168578    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:21.169327    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:21.169346    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:26.170176    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:26.170220    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0408 10:54:26.560534    8917 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0408 10:54:26.565911    8917 out.go:177] * Enabled addons: storage-provisioner
	I0408 10:54:26.577842    8917 addons.go:505] duration metric: took 30.531701833s for enable addons: enabled=[storage-provisioner]
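The default-storageclass failure above is the same unreachable apiserver surfacing client-side: the addon's callback lists StorageClasses against https://10.0.2.15:8443 and the TCP dial times out, while storage-provisioner is still reported as enabled. The failing call can be reproduced by hand with the binary and kubeconfig paths used elsewhere in this log:

    # Reproduce the addon callback's StorageClass list; with the apiserver
    # unreachable this fails with the same dial timeout.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      get storageclasses --request-timeout=5s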
	I0408 10:54:31.171627    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:31.171679    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:36.173249    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:36.173269    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:41.174327    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:41.174368    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:46.176609    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:46.176662    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:51.178982    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:51.179058    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:56.181568    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:56.181773    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:54:56.202682    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:54:56.202767    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:54:56.214250    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:54:56.214323    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:54:56.225555    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:54:56.225628    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:54:56.235762    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:54:56.235837    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:54:56.246204    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:54:56.246271    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:54:56.261485    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:54:56.261565    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:54:56.272710    8917 logs.go:276] 0 containers: []
	W0408 10:54:56.272727    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:54:56.272791    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:54:56.283969    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:54:56.283989    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:54:56.283997    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:54:56.296128    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:54:56.296139    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:54:56.307420    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:54:56.307433    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:54:56.322755    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:54:56.322769    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:54:56.356407    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:54:56.356418    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:54:56.361049    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:54:56.361058    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:54:56.398197    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:54:56.398210    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:54:56.413653    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:54:56.413665    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:54:56.428432    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:54:56.428445    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:54:56.440438    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:54:56.440450    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:54:56.451747    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:54:56.451758    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:54:56.469011    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:54:56.469022    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:54:56.480265    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:54:56.480275    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:54:59.007435    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:04.010181    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:04.010349    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:04.025195    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:04.025280    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:04.044954    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:04.045031    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:04.056616    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:04.056686    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:04.066919    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:04.066986    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:04.077067    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:04.077143    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:04.087668    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:04.087739    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:04.097694    8917 logs.go:276] 0 containers: []
	W0408 10:55:04.097704    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:04.097764    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:04.108100    8917 logs.go:276] 1 containers: [d175f0beb5d4]
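Each enumeration pass like the one above asks Docker for the container IDs of every expected control-plane component, one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` call per component, and logs the count at logs.go:276. A hypothetical local sketch of that fan-out in Go (the harness actually runs these over SSH via ssh_runner.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists every container, running or exited, whose name
    // carries the kubeadm k8s_<component> prefix. Illustrative only.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }

An empty result is not fatal: as the warnings above show, a missing component such as kindnet is merely logged and skipped.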
	I0408 10:55:04.108115    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:04.108121    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:55:04.123292    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:04.123303    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:04.158146    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:04.158154    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:04.195550    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:04.195563    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:04.210253    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:04.210264    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:04.224456    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:04.224469    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:04.241595    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:04.241606    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:04.252908    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:04.252920    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:04.278672    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:04.278681    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:04.290385    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:04.290396    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:04.294934    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:04.294941    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:04.306145    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:04.306156    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:04.318382    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:04.318394    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
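The gathering stage that follows each enumeration shells out once per source: `docker logs --tail 400 <id>` for every discovered container, journalctl for kubelet and Docker, dmesg, kubectl describe nodes, and a container-status listing. A minimal sketch of the per-container tail — hypothetical helper name; the harness wraps each command in /bin/bash -c and runs it over SSH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainer captures the last n log lines of one container, the
    // equivalent of the `docker logs --tail 400 <id>` calls above.
    // CombinedOutput is used because docker logs replays the container's
    // stderr stream on stderr.
    func tailContainer(id string, n int) (string, error) {
        cmd := fmt.Sprintf("docker logs --tail %d %s", n, id)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        for _, id := range []string{"d1ba90cef09b", "b7f1267d9e43", "34b4726b2637"} {
            logs, err := tailContainer(id, 400)
            if err != nil {
                fmt.Printf("%s: %v\n", id, err)
                continue
            }
            fmt.Println(logs)
        }
    }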
	I0408 10:55:06.834679    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:11.837248    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:11.837463    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:11.868164    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:11.868261    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:11.883214    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:11.883293    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:11.895638    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:11.895714    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:11.906465    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:11.906526    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:11.916806    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:11.916880    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:11.927205    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:11.927277    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:11.937465    8917 logs.go:276] 0 containers: []
	W0408 10:55:11.937475    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:11.937527    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:11.950524    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:55:11.950542    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:11.950547    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:11.967871    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:11.967882    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:11.991216    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:11.991224    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:12.025160    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:12.025171    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:12.029462    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:12.029471    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:12.063497    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:12.063511    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:12.078037    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:12.078048    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:12.089388    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:12.089400    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:12.100522    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:12.100532    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:12.114847    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:12.114860    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:12.131995    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:12.132007    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:55:12.147164    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:12.147175    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:55:12.158912    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:12.158926    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:14.670665    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:19.671725    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:19.671968    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:19.700495    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:19.700617    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:19.716640    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:19.716728    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:19.730175    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:19.730255    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:19.741372    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:19.741443    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:19.751617    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:19.751690    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:19.761909    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:19.761979    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:19.772046    8917 logs.go:276] 0 containers: []
	W0408 10:55:19.772058    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:19.772119    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:19.782790    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:55:19.782804    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:19.782809    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:55:19.794584    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:19.794596    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:19.806080    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:19.806092    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:19.829120    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:19.829130    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:19.840224    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:19.840235    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:19.875599    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:19.875612    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:19.910084    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:19.910097    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:19.924675    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:19.924685    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:19.939109    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:19.939122    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:55:19.954092    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:19.954103    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:19.971982    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:19.971994    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:19.976507    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:19.976515    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:19.990659    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:19.990669    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:22.504304    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:27.506603    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:27.506752    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:27.521309    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:27.521395    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:27.533101    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:27.533170    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:27.548667    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:27.548739    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:27.559123    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:27.559200    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:27.569689    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:27.569763    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:27.586672    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:27.586742    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:27.602587    8917 logs.go:276] 0 containers: []
	W0408 10:55:27.602599    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:27.602657    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:27.612818    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:55:27.612832    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:27.612839    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:27.646944    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:27.646957    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:27.662670    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:27.662682    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:27.676545    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:27.676556    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:55:27.691151    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:27.691162    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:27.709073    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:27.709084    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:27.720932    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:27.720943    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:27.755622    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:27.755634    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:27.767037    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:27.767048    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:27.778629    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:27.778642    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:55:27.789855    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:27.789866    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:27.813914    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:27.813924    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:27.824901    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:27.824914    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
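The container-status step a few lines up uses a shell fallback chain: `which crictl || echo crictl` substitutes the crictl path when one exists (otherwise the bare name, which then fails), and the outer `|| sudo docker ps -a` falls back to Docker. The same preference order in a hypothetical Go form:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl when it is on PATH and otherwise
    // falls back to docker, mirroring the shell chain in the log above.
    func containerStatus() (string, error) {
        tool := "docker"
        if _, err := exec.LookPath("crictl"); err == nil {
            tool = "crictl"
        }
        out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Println(out)
    }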
	I0408 10:55:30.330223    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:35.332865    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:35.333034    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:35.357611    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:35.357691    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:35.368661    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:35.368731    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:35.378907    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:35.378975    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:35.389861    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:35.389932    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:35.400222    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:35.400292    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:35.410860    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:35.410935    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:35.421245    8917 logs.go:276] 0 containers: []
	W0408 10:55:35.421256    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:35.421318    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:35.431964    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:55:35.431980    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:35.431986    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:35.466453    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:35.466466    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:35.481310    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:35.481322    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:35.495597    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:35.495607    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:35.508309    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:35.508323    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:55:35.521590    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:35.521600    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:35.539219    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:35.539230    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:35.573601    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:35.573609    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:35.577928    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:35.577935    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:35.589226    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:35.589238    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:35.601082    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:35.601094    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:35.624289    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:35.624296    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:35.638438    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:35.638450    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:55:38.154864    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:43.157186    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:43.157346    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:43.177286    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:43.177390    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:43.192152    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:43.192231    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:43.204268    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:43.204346    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:43.215476    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:43.215539    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:43.226130    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:43.226221    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:43.237581    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:43.237648    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:43.247749    8917 logs.go:276] 0 containers: []
	W0408 10:55:43.247762    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:43.247817    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:43.258322    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:55:43.258337    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:43.258342    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:55:43.273671    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:43.273680    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:55:43.285531    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:43.285540    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:43.302032    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:43.302042    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:43.325275    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:43.325282    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:43.336411    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:43.336421    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:43.347792    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:43.347803    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:43.352777    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:43.352787    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:43.388146    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:43.388158    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:43.402631    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:43.402641    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:43.416468    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:43.416479    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:43.428336    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:43.428348    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:43.447758    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:43.447769    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:45.984379    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:50.986765    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:50.986864    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:50.998115    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:50.998184    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:51.008269    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:51.008337    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:51.021646    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:51.021730    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:51.032126    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:51.032213    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:51.042642    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:51.042717    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:51.052757    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:51.052828    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:51.062600    8917 logs.go:276] 0 containers: []
	W0408 10:55:51.062612    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:51.062664    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:51.072906    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:55:51.072919    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:51.072925    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:51.109674    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:51.109685    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:51.125973    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:51.125984    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:51.138315    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:51.138326    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:51.149417    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:51.149432    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:55:51.161229    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:51.161239    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:51.172658    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:51.172673    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:51.184176    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:51.184187    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:51.219122    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:51.219131    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:51.236518    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:51.236531    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:55:51.251204    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:51.251215    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:51.268949    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:51.268960    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:51.298457    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:51.298465    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:53.804560    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:58.805338    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:58.805427    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:58.816956    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:58.817025    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:58.827619    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:58.827687    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:58.838366    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:58.838434    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:58.849171    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:58.849244    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:58.860809    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:58.860883    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:58.871273    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:58.871332    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:58.880981    8917 logs.go:276] 0 containers: []
	W0408 10:55:58.880993    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:58.881052    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:58.891564    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:55:58.891580    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:58.891586    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:58.929202    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:58.929213    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:58.947370    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:58.947381    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:58.959532    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:58.959543    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:58.971006    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:58.971016    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:58.995754    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:58.995762    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:55:59.007409    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:59.007420    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:59.024772    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:59.024782    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:59.036128    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:59.036139    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:59.070831    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:59.070838    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:59.075603    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:59.075609    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:59.089739    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:59.089754    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:59.101300    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:59.101311    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:01.618078    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:06.618568    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:06.618690    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:06.630548    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:56:06.630625    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:06.641670    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:56:06.641743    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:06.653255    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:56:06.653333    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:06.664865    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:56:06.664942    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:06.676620    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:56:06.676691    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:06.688271    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:56:06.688345    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:06.699223    8917 logs.go:276] 0 containers: []
	W0408 10:56:06.699235    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:06.699294    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:06.710734    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:56:06.710749    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:56:06.710754    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:56:06.733852    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:06.733864    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:06.758839    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:56:06.758850    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:06.773148    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:06.773160    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:06.807756    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:56:06.807768    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:56:06.819970    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:56:06.819981    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:06.834578    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:56:06.834588    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:56:06.854040    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:56:06.854051    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:56:06.865845    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:56:06.865854    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:56:06.877606    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:06.877615    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:06.910586    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:06.910592    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:06.915235    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:56:06.915241    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:56:06.929906    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:56:06.929916    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:56:09.453111    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:14.455701    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:14.455790    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:14.467943    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:56:14.468023    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:14.479444    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:56:14.479519    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:14.491574    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:56:14.491660    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:14.502920    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:56:14.502994    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:14.514592    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:56:14.514668    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:14.528486    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:56:14.528569    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:14.539576    8917 logs.go:276] 0 containers: []
	W0408 10:56:14.539588    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:14.539652    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:14.550571    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:56:14.550591    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:14.550597    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:14.587979    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:56:14.587993    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:56:14.603560    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:56:14.603570    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:56:14.618028    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:14.618040    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:14.644274    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:56:14.644286    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:14.655835    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:14.655845    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:14.660685    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:56:14.660691    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:56:14.673417    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:56:14.673431    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:56:14.684864    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:56:14.684875    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:56:14.696548    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:56:14.696559    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:56:14.709449    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:56:14.709460    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:14.728229    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:56:14.728239    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:56:14.739726    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:14.739738    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:14.780147    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:56:14.780161    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:56:14.794779    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:56:14.794790    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
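From the 10:56:14 pass onward the coredns enumeration returns 4 containers instead of 2: b9fb070ee194 and 538d20d19fe2 have joined 4c05907bbc81 and e0304763bc53. Because the enumeration uses `docker ps -a`, exited containers from restarted pods remain in the list, which is most likely why the count grows while the cluster itself stays unhealthy. A hypothetical variant that counts only live instances by adding a status filter (the harness deliberately keeps -a so it can pull logs from crashed containers):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runningCoreDNS lists only live coredns containers; exited ones left
    // behind by pod restarts are filtered out. Illustrative only.
    func runningCoreDNS() ([]string, error) {
        out, err := exec.Command("docker", "ps",
            "--filter", "name=k8s_coredns",
            "--filter", "status=running",
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := runningCoreDNS()
        if err != nil {
            fmt.Println("enumeration failed:", err)
            return
        }
        fmt.Printf("%d running coredns containers: %v\n", len(ids), ids)
    }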
	I0408 10:56:17.314887    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:22.315649    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:22.315728    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:22.328621    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:56:22.328692    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:22.342211    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:56:22.342279    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:22.354556    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:56:22.354629    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:22.366534    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:56:22.366611    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:22.378071    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:56:22.378147    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:22.389245    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:56:22.389302    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:22.400611    8917 logs.go:276] 0 containers: []
	W0408 10:56:22.400623    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:22.400682    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:22.412068    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:56:22.412087    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:22.412093    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:22.449697    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:22.449724    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:22.455639    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:56:22.455652    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:56:22.470978    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:56:22.470995    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:56:22.486853    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:56:22.486871    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:56:22.505173    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:56:22.505185    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:56:22.518389    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:56:22.518402    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:22.534910    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:56:22.534924    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:56:22.548942    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:22.548955    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:22.584219    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:56:22.584229    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:56:22.595679    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:56:22.595688    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:56:22.614264    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:56:22.614277    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:56:22.631830    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:22.631840    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:22.655372    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:56:22.655382    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:22.666782    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:56:22.666791    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:56:25.182689    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:30.185134    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:30.185215    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:30.200442    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:56:30.200518    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:30.216807    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:56:30.216877    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:30.228121    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:56:30.228221    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:30.240927    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:56:30.241004    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:30.252318    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:56:30.252388    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:30.263730    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:56:30.263807    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:30.274587    8917 logs.go:276] 0 containers: []
	W0408 10:56:30.274599    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:30.274662    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:30.286191    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:56:30.286208    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:56:30.286213    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:30.301284    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:56:30.301292    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:56:30.314006    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:56:30.314016    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:56:30.331070    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:56:30.331081    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:56:30.348628    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:30.348637    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:30.374681    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:56:30.374696    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:30.387104    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:56:30.387117    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:56:30.399991    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:56:30.400002    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:56:30.411730    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:56:30.411741    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:56:30.424271    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:56:30.424283    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:56:30.442301    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:56:30.442318    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:56:30.454870    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:30.454881    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:30.491200    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:56:30.491212    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:56:30.511816    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:30.511827    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:30.547821    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:30.547829    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:33.054288    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:38.056560    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:38.056638    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:38.068254    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:56:38.068323    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:38.079779    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:56:38.079874    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:38.090785    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:56:38.090863    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:38.102803    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:56:38.102875    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:38.114185    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:56:38.114256    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:38.125916    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:56:38.125993    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:38.136968    8917 logs.go:276] 0 containers: []
	W0408 10:56:38.136981    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:38.137040    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:38.148285    8917 logs.go:276] 1 containers: [d175f0beb5d4]
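	Before gathering anything, each retry cycle rediscovers the control-plane containers by name filter, one `docker ps -a` call per component, as the block above shows. A hedged sketch of that discovery step; the component list and command strings come from the log, while the helper name containerIDs is invented for illustration:

```go
// Hypothetical sketch: list container IDs per Kubernetes component using
// the same docker ps filter seen in the log lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// one ID per output line; an empty slice means no match (e.g. kindnet above)
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:276 lines
	}
}
```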
	I0408 10:56:38.148304    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:38.148309    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:38.184468    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:56:38.184479    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:56:38.200426    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:56:38.200437    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:56:38.213055    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:56:38.213067    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:56:38.232435    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:56:38.232445    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:56:38.245123    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:56:38.245134    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:38.258239    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:38.258252    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:38.299290    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:56:38.299305    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:56:38.314014    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:56:38.314025    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:38.338876    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:56:38.338889    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:56:38.351941    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:56:38.351954    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:56:38.367186    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:56:38.367202    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:56:38.379297    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:38.379313    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:38.403911    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:38.403920    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:38.408637    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:56:38.408644    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
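	With the container IDs in hand, the cycle fans out over every log source: `docker logs --tail 400` per container, journalctl for kubelet and Docker, dmesg, crictl/docker ps for container status, and kubectl describe nodes. A rough sketch of that fan-out, reusing the exact command strings from the log; the surrounding structure and container IDs are illustrative:

```go
// Hypothetical sketch: run each gathering command through bash, as the
// ssh_runner.go:195 lines above do, and report what was collected.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, command string) {
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	if err != nil {
		fmt.Printf("gathering %s failed: %v\n", name, err)
		return
	}
	fmt.Printf("=== %s (%d bytes) ===\n", name, len(out))
}

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	// per-container tails; IDs here are the kube-apiserver and etcd IDs from the log
	for _, id := range []string{"d1ba90cef09b", "b7f1267d9e43"} {
		sources["container "+id] = "docker logs --tail 400 " + id
	}
	for name, cmd := range sources {
		gather(name, cmd)
	}
}
```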
	I0408 10:56:40.926424    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:45.927337    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:45.927563    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:45.948599    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:56:45.948695    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:45.964430    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:56:45.964515    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:45.977402    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:56:45.977472    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:45.989166    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:56:45.989241    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:46.001634    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:56:46.001711    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:46.013388    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:56:46.013460    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:46.030721    8917 logs.go:276] 0 containers: []
	W0408 10:56:46.030735    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:46.030805    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:46.042739    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:56:46.042758    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:56:46.042763    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:56:46.055566    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:56:46.055581    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:46.072306    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:46.072321    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:46.077414    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:46.077425    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:46.113526    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:56:46.113540    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:56:46.134510    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:56:46.134523    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:56:46.147923    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:46.147934    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:46.184495    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:56:46.184512    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:56:46.197171    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:56:46.197183    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:56:46.210065    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:46.210078    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:46.235241    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:56:46.235254    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:56:46.253646    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:56:46.253658    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:56:46.272145    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:56:46.272156    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:56:46.286674    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:56:46.286683    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:56:46.301839    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:56:46.301852    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:48.816447    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:53.818725    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:53.819317    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:53.868286    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:56:53.868419    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:53.888313    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:56:53.888399    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:53.902975    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:56:53.903061    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:53.915375    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:56:53.915454    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:53.931094    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:56:53.931171    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:53.944939    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:56:53.944968    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:53.957364    8917 logs.go:276] 0 containers: []
	W0408 10:56:53.957376    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:53.957441    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:53.971006    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:56:53.971023    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:53.971029    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:53.976237    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:56:53.976251    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:56:53.989921    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:56:53.989933    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:56:54.002980    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:56:54.002996    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:56:54.021872    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:56:54.021890    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:56:54.034185    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:56:54.034196    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:54.048579    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:54.048593    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:54.085585    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:56:54.085601    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:56:54.101624    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:56:54.101639    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:56:54.126819    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:56:54.126830    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:56:54.141369    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:54.141379    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:54.178957    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:56:54.178970    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:56:54.198851    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:56:54.198862    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:56:54.220345    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:54.220356    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:54.246625    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:56:54.246643    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:56.764764    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:01.767224    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:01.767459    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:01.785067    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:57:01.785156    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:01.798588    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:57:01.798661    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:01.810445    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:57:01.810520    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:01.821961    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:57:01.822031    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:01.833142    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:57:01.833219    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:01.846983    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:57:01.847061    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:01.860007    8917 logs.go:276] 0 containers: []
	W0408 10:57:01.860021    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:01.860088    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:01.879169    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:57:01.879187    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:57:01.879192    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:57:01.895863    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:57:01.895871    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:57:01.910603    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:57:01.910615    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:57:01.923593    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:01.923604    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:01.950718    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:57:01.950740    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:01.964054    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:01.964067    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:01.999731    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:01.999742    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:02.005000    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:02.005011    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:02.041415    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:57:02.041428    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:57:02.056869    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:57:02.056882    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:57:02.070878    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:57:02.070889    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:57:02.090208    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:57:02.090223    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:57:02.103057    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:57:02.103067    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:57:02.116370    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:57:02.116381    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:57:02.128933    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:57:02.128945    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:57:04.651124    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:09.653332    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:09.653466    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:09.671088    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:57:09.671169    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:09.682982    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:57:09.683050    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:09.693406    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:57:09.693484    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:09.703707    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:57:09.703774    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:09.715245    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:57:09.715315    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:09.730382    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:57:09.730456    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:09.741189    8917 logs.go:276] 0 containers: []
	W0408 10:57:09.741201    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:09.741261    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:09.752416    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:57:09.752436    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:57:09.752441    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:57:09.765546    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:57:09.765557    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:57:09.782224    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:09.782242    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:09.818995    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:09.819010    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:09.859463    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:57:09.859474    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:57:09.872252    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:57:09.872263    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:57:09.890712    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:09.890726    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:09.896347    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:57:09.896354    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:57:09.910966    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:57:09.910977    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:57:09.928641    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:57:09.928649    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:57:09.953731    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:57:09.953740    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:57:09.966594    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:57:09.966605    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:57:09.979372    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:09.979388    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:10.005271    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:57:10.005286    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:10.020362    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:57:10.020374    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:57:12.538055    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:17.540510    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:17.540735    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:17.559953    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:57:17.560069    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:17.574281    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:57:17.574358    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:17.585883    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:57:17.585954    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:17.596459    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:57:17.596533    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:17.611203    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:57:17.611268    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:17.621981    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:57:17.622055    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:17.633058    8917 logs.go:276] 0 containers: []
	W0408 10:57:17.633070    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:17.633132    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:17.645147    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:57:17.645171    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:17.645178    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:17.650207    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:57:17.650219    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:57:17.665823    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:17.665839    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:17.703096    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:57:17.703113    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:57:17.722324    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:57:17.722334    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:57:17.735383    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:57:17.735396    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:57:17.751555    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:57:17.751573    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:57:17.764121    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:57:17.764133    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:57:17.776539    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:57:17.776550    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:17.789070    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:17.789083    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:17.826799    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:57:17.826812    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:57:17.839058    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:57:17.839073    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:57:17.854846    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:57:17.854858    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:57:17.867195    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:57:17.867206    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:57:17.885453    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:17.885463    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:20.414262    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:25.416075    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:25.416422    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:25.452383    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:57:25.452523    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:25.470972    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:57:25.471062    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:25.484859    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:57:25.484941    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:25.496180    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:57:25.496262    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:25.509984    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:57:25.510055    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:25.520224    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:57:25.520294    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:25.531598    8917 logs.go:276] 0 containers: []
	W0408 10:57:25.531613    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:25.531667    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:25.543560    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:57:25.543578    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:25.543584    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:25.581058    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:25.581074    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:25.618793    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:57:25.618804    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:57:25.631651    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:57:25.631665    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:57:25.649839    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:57:25.649853    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:57:25.662058    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:25.662072    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:25.685394    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:25.685408    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:25.689836    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:57:25.689842    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:57:25.704436    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:57:25.704450    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:57:25.717033    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:57:25.717054    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:57:25.733195    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:57:25.733214    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:57:25.746146    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:57:25.746158    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:25.759202    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:57:25.759218    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:57:25.771991    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:57:25.772005    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:57:25.786371    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:57:25.786384    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:57:28.303371    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:33.305597    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:33.305702    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:33.316300    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:57:33.316371    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:33.332484    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:57:33.332555    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:33.361500    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:57:33.361584    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:33.381783    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:57:33.381861    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:33.393510    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:57:33.393592    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:33.405021    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:57:33.405102    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:33.415729    8917 logs.go:276] 0 containers: []
	W0408 10:57:33.415738    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:33.415781    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:33.427433    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:57:33.427451    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:57:33.427463    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:57:33.440429    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:57:33.440442    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:57:33.456115    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:57:33.456128    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:57:33.468669    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:57:33.468684    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:57:33.483801    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:57:33.483811    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:57:33.496548    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:57:33.496559    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:57:33.509797    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:57:33.509813    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:33.522589    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:57:33.522604    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:57:33.537389    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:33.537399    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:33.541877    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:33.541884    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:33.579860    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:57:33.579871    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:57:33.595818    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:57:33.595831    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:57:33.609104    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:57:33.609116    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:57:33.627993    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:33.628010    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:33.666859    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:33.666874    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:36.194075    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:41.196596    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:41.196952    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:41.238053    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:57:41.238166    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:41.256605    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:57:41.256702    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:41.271458    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:57:41.271533    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:41.282834    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:57:41.282913    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:41.293333    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:57:41.293394    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:41.303740    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:57:41.303808    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:41.314311    8917 logs.go:276] 0 containers: []
	W0408 10:57:41.314321    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:41.314373    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:41.325006    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:57:41.325022    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:41.325027    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:41.357998    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:41.358006    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:41.362262    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:57:41.362270    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:57:41.377954    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:57:41.377963    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:57:41.397169    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:57:41.397180    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:57:41.411981    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:57:41.411992    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:57:41.426996    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:57:41.427008    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:57:41.438863    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:57:41.438877    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:41.451999    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:41.452012    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:41.487408    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:57:41.487421    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:57:41.500074    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:57:41.500086    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:57:41.524016    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:41.524030    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:41.547708    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:57:41.547718    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:57:41.562394    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:57:41.562406    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:57:41.573832    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:57:41.573844    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:57:44.087626    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:49.089891    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:49.090100    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:49.112252    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:57:49.112353    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:49.127825    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:57:49.127911    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:49.141368    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:57:49.141448    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:49.152200    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:57:49.152270    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:49.162545    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:57:49.162620    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:49.173335    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:57:49.173407    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:49.183731    8917 logs.go:276] 0 containers: []
	W0408 10:57:49.183745    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:49.183799    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:49.194966    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:57:49.194986    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:57:49.194991    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:57:49.213421    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:49.213433    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:49.248172    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:57:49.248184    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:57:49.259940    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:57:49.259951    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:57:49.271211    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:57:49.271223    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:57:49.287715    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:57:49.287729    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:57:49.300715    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:57:49.300729    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:57:49.321771    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:57:49.321783    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:57:49.333795    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:57:49.333805    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:57:49.346083    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:49.346092    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:49.368800    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:57:49.368808    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:49.380040    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:49.380053    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:49.384629    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:57:49.384638    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:57:49.398866    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:49.398880    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:49.439322    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:57:49.439332    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:57:51.956375    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:56.958755    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:56.963367    8917 out.go:177] 
	W0408 10:57:56.966340    8917 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0408 10:57:56.966353    8917 out.go:239] * 
	W0408 10:57:56.967184    8917 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:57:56.977278    8917 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-603000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-04-08 10:57:57.068503 -0700 PDT m=+1345.949541126
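The harness treats the non-zero exit of `minikube start` as the test failure; the stderr above maps the GUEST_START reason to exit status 80. A small sketch of how such an exit status can be extracted in Go via exec.ExitError; the binary path and flags mirror the failing command, and everything else is illustrative:

```go
// Hypothetical sketch: run the failing start command and pull the exit code
// out of the *exec.ExitError, as a test harness would.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start",
		"-p", "running-upgrade-603000", "--memory=2200",
		"--alsologtostderr", "-v=1", "--driver=qemu2")
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// In the run above this prints 80: the GUEST_START failure
			// (apiserver healthz never reported healthy).
			fmt.Println("minikube start exited with status", ee.ExitCode())
			return
		}
		fmt.Println("failed to run:", err)
	}
}
```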
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-603000 -n running-upgrade-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-603000 -n running-upgrade-603000: exit status 2 (15.664693583s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-603000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p force-systemd-flag-996000          | force-systemd-flag-996000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:48 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-env-117000              | force-systemd-env-117000  | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:48 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-env-117000           | force-systemd-env-117000  | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:48 PDT | 08 Apr 24 10:48 PDT |
	| start   | -p docker-flags-838000                | docker-flags-838000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:48 PDT |                     |
	|         | --cache-images=false                  |                           |         |                |                     |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --install-addons=false                |                           |         |                |                     |                     |
	|         | --wait=false                          |                           |         |                |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |                |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |                |                     |                     |
	|         | --docker-opt=debug                    |                           |         |                |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-flag-996000             | force-systemd-flag-996000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:48 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-996000          | force-systemd-flag-996000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:48 PDT | 08 Apr 24 10:48 PDT |
	| start   | -p cert-expiration-454000             | cert-expiration-454000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:48 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | docker-flags-838000 ssh               | docker-flags-838000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:48 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=Environment                |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| ssh     | docker-flags-838000 ssh               | docker-flags-838000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:48 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=ExecStart                  |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| delete  | -p docker-flags-838000                | docker-flags-838000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:48 PDT | 08 Apr 24 10:48 PDT |
	| start   | -p cert-options-055000                | cert-options-055000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:48 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | cert-options-055000 ssh               | cert-options-055000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:48 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |                |                     |                     |
	| ssh     | -p cert-options-055000 -- sudo        | cert-options-055000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:48 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |                |                     |                     |
	| delete  | -p cert-options-055000                | cert-options-055000       | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:48 PDT | 08 Apr 24 10:48 PDT |
	| start   | -p running-upgrade-603000             | minikube                  | jenkins | v1.26.0        | 08 Apr 24 10:48 PDT | 08 Apr 24 10:49 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| start   | -p running-upgrade-603000             | running-upgrade-603000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:49 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| start   | -p cert-expiration-454000             | cert-expiration-454000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:51 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p cert-expiration-454000             | cert-expiration-454000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:51 PDT | 08 Apr 24 10:51 PDT |
	| start   | -p kubernetes-upgrade-633000          | kubernetes-upgrade-633000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:51 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-633000          | kubernetes-upgrade-633000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:51 PDT | 08 Apr 24 10:51 PDT |
	| start   | -p kubernetes-upgrade-633000          | kubernetes-upgrade-633000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:51 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1     |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-633000          | kubernetes-upgrade-633000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:52 PDT | 08 Apr 24 10:52 PDT |
	| start   | -p stopped-upgrade-476000             | minikube                  | jenkins | v1.26.0        | 08 Apr 24 10:52 PDT | 08 Apr 24 10:52 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-476000 stop           | minikube                  | jenkins | v1.26.0        | 08 Apr 24 10:52 PDT | 08 Apr 24 10:52 PDT |
	| start   | -p stopped-upgrade-476000             | stopped-upgrade-476000    | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:52 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
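	The table above is the tail of minikube's command audit log: one row per invocation, showing the arguments, the profile it targeted, the user, the minikube version, and the start/end times (a blank end time generally means the command never completed successfully). minikube keeps these entries as JSON under its home directory; assuming the conventional location (logs/audit.json under the MINIKUBE_HOME shown later in this log — both the path and the field name are assumptions, not confirmed by this report), the rows for one profile could be pulled out with something like:
	
		# sketch only: audit-file path and JSON field name are assumptions
		grep '"profile":"stopped-upgrade-476000"' \
		  /Users/jenkins/minikube-integration/18585-6624/.minikube/logs/audit.json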
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 10:52:52
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
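	These are klog-style lines: a severity letter (I=info, W=warning, E=error, F=fatal), the date as mmdd, a wall-clock timestamp, the thread id, and the emitting file:line, followed by the message. Assuming the "Last Start" section has been saved to a file (last_start.log is a hypothetical name), the warnings and errors can be isolated with, for example:
	
		# [WEF] matches warning/error/fatal severities; the leading character
		# class allows for the indentation used in this report
		grep -E '^[[:space:]]*[WEF][0-9]{4} ' last_start.log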
	I0408 10:52:52.588913    9084 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:52:52.589061    9084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:52:52.589065    9084 out.go:304] Setting ErrFile to fd 2...
	I0408 10:52:52.589068    9084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:52:52.589232    9084 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:52:52.590387    9084 out.go:298] Setting JSON to false
	I0408 10:52:52.609840    9084 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6742,"bootTime":1712592030,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:52:52.609899    9084 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:52:52.614702    9084 out.go:177] * [stopped-upgrade-476000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:52:52.622648    9084 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:52:52.627604    9084 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:52:52.622702    9084 notify.go:220] Checking for updates...
	I0408 10:52:52.633616    9084 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:52:52.636636    9084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:52:52.639568    9084 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:52:52.642637    9084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:52:52.645917    9084 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 10:52:52.649569    9084 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 10:52:52.652634    9084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:52:52.655554    9084 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 10:52:52.662614    9084 start.go:297] selected driver: qemu2
	I0408 10:52:52.662621    9084 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 10:52:52.662677    9084 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:52:52.665456    9084 cni.go:84] Creating CNI manager for ""
	I0408 10:52:52.665471    9084 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:52:52.665494    9084 start.go:340] cluster config:
	{Name:stopped-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 10:52:52.665547    9084 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:52:52.672603    9084 out.go:177] * Starting "stopped-upgrade-476000" primary control-plane node in "stopped-upgrade-476000" cluster
	I0408 10:52:52.676666    9084 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0408 10:52:52.676684    9084 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0408 10:52:52.676696    9084 cache.go:56] Caching tarball of preloaded images
	I0408 10:52:52.676755    9084 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:52:52.676760    9084 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0408 10:52:52.676819    9084 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/config.json ...
	I0408 10:52:52.677355    9084 start.go:360] acquireMachinesLock for stopped-upgrade-476000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:52:52.677387    9084 start.go:364] duration metric: took 25.709µs to acquireMachinesLock for "stopped-upgrade-476000"
	I0408 10:52:52.677395    9084 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:52:52.677399    9084 fix.go:54] fixHost starting: 
	I0408 10:52:52.677512    9084 fix.go:112] recreateIfNeeded on stopped-upgrade-476000: state=Stopped err=<nil>
	W0408 10:52:52.677520    9084 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:52:52.684634    9084 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-476000" ...
	I0408 10:52:47.969006    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:52:52.688722    9084 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51442-:22,hostfwd=tcp::51443-:2376,hostname=stopped-upgrade-476000 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/disk.qcow2
	I0408 10:52:52.737553    9084 main.go:141] libmachine: STDOUT: 
	I0408 10:52:52.737582    9084 main.go:141] libmachine: STDERR: 
	I0408 10:52:52.737588    9084 main.go:141] libmachine: Waiting for VM to start (ssh -p 51442 docker@127.0.0.1)...
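	The hostfwd rules in the qemu-system-aarch64 invocation above map host ports into the guest: 51442 → 22 (SSH) and 51443 → 2376 (Docker's TLS port). That is why libmachine waits on ssh -p 51442 docker@127.0.0.1; the same session could be opened by hand with the machine key that sshutil reports later in this log, e.g.:
	
		# minimal sketch using the id_rsa path shown by sshutil below
		ssh -p 51442 \
		  -i /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/id_rsa \
		  docker@127.0.0.1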
	I0408 10:52:52.971677    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:52:52.971787    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:52:52.982920    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:52:52.982995    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:52:52.993819    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:52:52.993890    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:52:53.005686    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:52:53.005767    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:52:53.016856    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:52:53.016924    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:52:53.027592    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:52:53.027656    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:52:53.038691    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:52:53.038760    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:52:53.054203    8917 logs.go:276] 0 containers: []
	W0408 10:52:53.054215    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:52:53.054273    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:52:53.065502    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:52:53.065524    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:52:53.065529    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:52:53.077926    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:52:53.077938    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:52:53.090271    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:52:53.090284    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:52:53.129370    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:52:53.129382    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:52:53.145101    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:52:53.145115    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:52:53.163583    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:52:53.163621    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:52:53.175731    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:52:53.175744    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:52:53.203185    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:52:53.203202    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:52:53.218682    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:52:53.218694    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:52:53.242641    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:52:53.242664    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:52:53.258088    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:52:53.258100    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:52:53.262884    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:52:53.262893    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:52:53.284863    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:52:53.284874    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:52:53.299090    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:52:53.299100    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:52:53.310954    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:52:53.310964    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:52:53.322988    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:52:53.323001    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:52:53.361054    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:52:53.361062    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:52:55.876739    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:00.879153    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
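	The interleaved 8917 lines belong to a concurrent start (apparently the running-upgrade-603000 profile, judging by the audit table): it polls the API server's /healthz endpoint and, after each timeout, falls back to enumerating the kubernetes containers with docker ps and dumping their logs before retrying. From inside that guest the probe is roughly equivalent to:
	
		# sketch of the healthz probe; -k skips certificate verification
		# for illustration only and is an assumption, not shown in the log
		curl -k --max-time 5 https://10.0.2.15:8443/healthz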
	I0408 10:53:00.879289    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:53:00.899668    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:53:00.899743    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:53:00.910647    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:53:00.910711    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:53:00.921419    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:53:00.921492    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:53:00.931954    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:53:00.932024    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:53:00.943378    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:53:00.943446    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:53:00.954370    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:53:00.954437    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:53:00.965330    8917 logs.go:276] 0 containers: []
	W0408 10:53:00.965342    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:53:00.965403    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:53:00.975828    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:53:00.975848    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:53:00.975856    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:53:00.999155    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:53:00.999165    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:53:01.013262    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:53:01.013276    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:53:01.024865    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:53:01.024876    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:53:01.036364    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:53:01.036375    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:53:01.048832    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:53:01.048843    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:53:01.087677    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:53:01.087684    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:53:01.122436    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:53:01.122449    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:53:01.134416    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:53:01.134427    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:53:01.147549    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:53:01.147559    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:53:01.161670    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:53:01.161684    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:53:01.174789    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:53:01.174800    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:53:01.189607    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:53:01.189620    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:53:01.194339    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:53:01.194346    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:53:01.212691    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:53:01.212701    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:53:01.238285    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:53:01.238300    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:53:01.260832    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:53:01.260850    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:53:03.782408    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:08.784764    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:08.785209    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:53:08.823679    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:53:08.823827    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:53:08.846372    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:53:08.846498    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:53:08.861900    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:53:08.861980    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:53:08.874927    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:53:08.875008    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:53:08.891298    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:53:08.891373    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:53:08.906906    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:53:08.906973    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:53:08.917939    8917 logs.go:276] 0 containers: []
	W0408 10:53:08.917957    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:53:08.918021    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:53:08.935419    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:53:08.935439    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:53:08.935445    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:53:08.967964    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:53:08.967981    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:53:08.987668    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:53:08.987680    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:53:09.002639    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:53:09.002649    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:53:09.020269    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:53:09.020278    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:53:09.031209    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:53:09.031220    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:53:09.070108    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:53:09.070114    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:53:09.083782    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:53:09.083793    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:53:09.095699    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:53:09.095710    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:53:09.106600    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:53:09.106615    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:53:09.120883    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:53:09.120895    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:53:09.135004    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:53:09.135016    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:53:09.169262    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:53:09.169274    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:53:09.192614    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:53:09.192626    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:53:09.204428    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:53:09.204442    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:53:09.219724    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:53:09.219736    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:53:09.242498    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:53:09.242509    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:53:11.750558    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:13.430943    9084 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/config.json ...
	I0408 10:53:13.431749    9084 machine.go:94] provisionDockerMachine start ...
	I0408 10:53:13.431965    9084 main.go:141] libmachine: Using SSH client type: native
	I0408 10:53:13.432520    9084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fedc80] 0x102ff04e0 <nil>  [] 0s} localhost 51442 <nil> <nil>}
	I0408 10:53:13.432538    9084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 10:53:13.512780    9084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 10:53:13.512825    9084 buildroot.go:166] provisioning hostname "stopped-upgrade-476000"
	I0408 10:53:13.512946    9084 main.go:141] libmachine: Using SSH client type: native
	I0408 10:53:13.513182    9084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fedc80] 0x102ff04e0 <nil>  [] 0s} localhost 51442 <nil> <nil>}
	I0408 10:53:13.513194    9084 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-476000 && echo "stopped-upgrade-476000" | sudo tee /etc/hostname
	I0408 10:53:13.590866    9084 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-476000
	
	I0408 10:53:13.590959    9084 main.go:141] libmachine: Using SSH client type: native
	I0408 10:53:13.591144    9084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fedc80] 0x102ff04e0 <nil>  [] 0s} localhost 51442 <nil> <nil>}
	I0408 10:53:13.591156    9084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-476000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-476000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-476000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 10:53:13.658265    9084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 10:53:13.658279    9084 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18585-6624/.minikube CaCertPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18585-6624/.minikube}
	I0408 10:53:13.658287    9084 buildroot.go:174] setting up certificates
	I0408 10:53:13.658293    9084 provision.go:84] configureAuth start
	I0408 10:53:13.658298    9084 provision.go:143] copyHostCerts
	I0408 10:53:13.658379    9084 exec_runner.go:144] found /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.pem, removing ...
	I0408 10:53:13.658387    9084 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.pem
	I0408 10:53:13.658505    9084 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.pem (1082 bytes)
	I0408 10:53:13.658728    9084 exec_runner.go:144] found /Users/jenkins/minikube-integration/18585-6624/.minikube/cert.pem, removing ...
	I0408 10:53:13.658733    9084 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18585-6624/.minikube/cert.pem
	I0408 10:53:13.658795    9084 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18585-6624/.minikube/cert.pem (1123 bytes)
	I0408 10:53:13.658935    9084 exec_runner.go:144] found /Users/jenkins/minikube-integration/18585-6624/.minikube/key.pem, removing ...
	I0408 10:53:13.658939    9084 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18585-6624/.minikube/key.pem
	I0408 10:53:13.658995    9084 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18585-6624/.minikube/key.pem (1675 bytes)
	I0408 10:53:13.659114    9084 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-476000 san=[127.0.0.1 localhost minikube stopped-upgrade-476000]
	I0408 10:53:13.702988    9084 provision.go:177] copyRemoteCerts
	I0408 10:53:13.703026    9084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 10:53:13.703032    9084 sshutil.go:53] new ssh client: &{IP:localhost Port:51442 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/id_rsa Username:docker}
	I0408 10:53:13.736627    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 10:53:13.743433    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0408 10:53:13.749956    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 10:53:13.757393    9084 provision.go:87] duration metric: took 99.088875ms to configureAuth
	I0408 10:53:13.757402    9084 buildroot.go:189] setting minikube options for container-runtime
	I0408 10:53:13.757522    9084 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 10:53:13.757558    9084 main.go:141] libmachine: Using SSH client type: native
	I0408 10:53:13.757647    9084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fedc80] 0x102ff04e0 <nil>  [] 0s} localhost 51442 <nil> <nil>}
	I0408 10:53:13.757652    9084 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 10:53:13.819282    9084 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 10:53:13.819289    9084 buildroot.go:70] root file system type: tmpfs
	I0408 10:53:13.819342    9084 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 10:53:13.819391    9084 main.go:141] libmachine: Using SSH client type: native
	I0408 10:53:13.819504    9084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fedc80] 0x102ff04e0 <nil>  [] 0s} localhost 51442 <nil> <nil>}
	I0408 10:53:13.819539    9084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 10:53:13.882506    9084 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 10:53:13.882552    9084 main.go:141] libmachine: Using SSH client type: native
	I0408 10:53:13.882674    9084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fedc80] 0x102ff04e0 <nil>  [] 0s} localhost 51442 <nil> <nil>}
	I0408 10:53:13.882683    9084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 10:53:14.253880    9084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0408 10:53:14.253893    9084 machine.go:97] duration metric: took 822.128084ms to provisionDockerMachine
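	The diff/mv compound command issued just above is an update-if-changed idiom: diff -u exits non-zero when the new unit differs from the installed one (or, as here, when no unit is installed yet), and only in that case is the new file moved into place and docker reloaded, enabled, and restarted. The general shape, with hypothetical paths:
	
		# install a config file only when its content actually changed
		sudo diff -u /etc/example.conf /etc/example.conf.new \
		  || { sudo mv /etc/example.conf.new /etc/example.conf \
		       && sudo systemctl daemon-reload; }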
	I0408 10:53:14.253899    9084 start.go:293] postStartSetup for "stopped-upgrade-476000" (driver="qemu2")
	I0408 10:53:14.253914    9084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 10:53:14.253991    9084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 10:53:14.254000    9084 sshutil.go:53] new ssh client: &{IP:localhost Port:51442 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/id_rsa Username:docker}
	I0408 10:53:14.288755    9084 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 10:53:14.290065    9084 info.go:137] Remote host: Buildroot 2021.02.12
	I0408 10:53:14.290076    9084 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18585-6624/.minikube/addons for local assets ...
	I0408 10:53:14.290156    9084 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18585-6624/.minikube/files for local assets ...
	I0408 10:53:14.290267    9084 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18585-6624/.minikube/files/etc/ssl/certs/70432.pem -> 70432.pem in /etc/ssl/certs
	I0408 10:53:14.290388    9084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 10:53:14.293411    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/files/etc/ssl/certs/70432.pem --> /etc/ssl/certs/70432.pem (1708 bytes)
	I0408 10:53:14.300606    9084 start.go:296] duration metric: took 46.693458ms for postStartSetup
	I0408 10:53:14.300622    9084 fix.go:56] duration metric: took 21.623118917s for fixHost
	I0408 10:53:14.300656    9084 main.go:141] libmachine: Using SSH client type: native
	I0408 10:53:14.300755    9084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fedc80] 0x102ff04e0 <nil>  [] 0s} localhost 51442 <nil> <nil>}
	I0408 10:53:14.300762    9084 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 10:53:14.358564    9084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712598794.516776629
	
	I0408 10:53:14.358571    9084 fix.go:216] guest clock: 1712598794.516776629
	I0408 10:53:14.358575    9084 fix.go:229] Guest: 2024-04-08 10:53:14.516776629 -0700 PDT Remote: 2024-04-08 10:53:14.300624 -0700 PDT m=+21.748189376 (delta=216.152629ms)
	I0408 10:53:14.358585    9084 fix.go:200] guest clock delta is within tolerance: 216.152629ms
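	The literal %!s(MISSING) sequences in this log (including the date command above) are Go fmt artifacts: minikube logs the command template without its arguments, so the format verbs render as missing. The command actually run on the guest is date +%s.%N, as its output shows, and the guest timestamp is compared with the host clock to produce the delta reported above:
	
		date +%s.%N   # seconds.nanoseconds, e.g. 1712598794.516776629
		# delta = 1712598794.516776629 - 1712598794.300624 ≈ 216.15ms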
	I0408 10:53:14.358588    9084 start.go:83] releasing machines lock for "stopped-upgrade-476000", held for 21.681092417s
	I0408 10:53:14.358655    9084 ssh_runner.go:195] Run: cat /version.json
	I0408 10:53:14.358658    9084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 10:53:14.358664    9084 sshutil.go:53] new ssh client: &{IP:localhost Port:51442 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/id_rsa Username:docker}
	I0408 10:53:14.358675    9084 sshutil.go:53] new ssh client: &{IP:localhost Port:51442 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/id_rsa Username:docker}
	W0408 10:53:14.359242    9084 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51442: connect: connection refused
	I0408 10:53:14.359269    9084 retry.go:31] will retry after 374.088625ms: dial tcp [::1]:51442: connect: connection refused
	W0408 10:53:14.387763    9084 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0408 10:53:14.387819    9084 ssh_runner.go:195] Run: systemctl --version
	I0408 10:53:14.389534    9084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 10:53:14.391196    9084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 10:53:14.391224    9084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0408 10:53:14.394027    9084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0408 10:53:14.399037    9084 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 10:53:14.399045    9084 start.go:494] detecting cgroup driver to use...
	I0408 10:53:14.399121    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 10:53:14.405992    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0408 10:53:14.409560    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 10:53:14.412870    9084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 10:53:14.412897    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 10:53:14.416290    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 10:53:14.419055    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 10:53:14.421953    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 10:53:14.425258    9084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 10:53:14.428834    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 10:53:14.431899    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 10:53:14.434526    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 10:53:14.437639    9084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 10:53:14.440795    9084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 10:53:14.443406    9084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:53:14.523245    9084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 10:53:14.528901    9084 start.go:494] detecting cgroup driver to use...
	I0408 10:53:14.528959    9084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 10:53:14.534883    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 10:53:14.540572    9084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 10:53:14.548860    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 10:53:14.553331    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 10:53:14.558019    9084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 10:53:14.617399    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 10:53:14.622690    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 10:53:14.628689    9084 ssh_runner.go:195] Run: which cri-dockerd
	I0408 10:53:14.629889    9084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 10:53:14.632690    9084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0408 10:53:14.637535    9084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 10:53:14.715819    9084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 10:53:14.796133    9084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 10:53:14.796197    9084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0408 10:53:14.801878    9084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:53:14.878019    9084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 10:53:16.032286    9084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.154245917s)
	I0408 10:53:16.032364    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0408 10:53:16.037429    9084 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0408 10:53:16.042586    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 10:53:16.047469    9084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0408 10:53:16.123738    9084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0408 10:53:16.204605    9084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:53:16.279773    9084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0408 10:53:16.285349    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 10:53:16.289850    9084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:53:16.370647    9084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0408 10:53:16.411442    9084 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0408 10:53:16.411530    9084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0408 10:53:16.415036    9084 start.go:562] Will wait 60s for crictl version
	I0408 10:53:16.415095    9084 ssh_runner.go:195] Run: which crictl
	I0408 10:53:16.416417    9084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 10:53:16.431128    9084 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0408 10:53:16.431197    9084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 10:53:16.448055    9084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 10:53:16.472638    9084 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0408 10:53:16.472704    9084 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0408 10:53:16.473952    9084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
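
The /etc/hosts rewrite above is a strip-then-append idiom: drop any existing line for the name, append the fresh tab-separated mapping, and copy the result back with sudo. A reusable sketch (the helper name update_hosts_entry is illustrative, not from minikube):

    # Idempotently (re)write a tab-separated /etc/hosts entry.
    update_hosts_entry() {
      local ip="$1" name="$2" tmp
      tmp="$(mktemp)"
      { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
      sudo cp "$tmp" /etc/hosts && rm -f "$tmp"
    }
    update_hosts_entry 10.0.2.2 host.minikube.internal
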
	I0408 10:53:16.477797    9084 kubeadm.go:877] updating cluster {Name:stopped-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0408 10:53:16.477846    9084 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0408 10:53:16.477886    9084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 10:53:16.489035    9084 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0408 10:53:16.489045    9084 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
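
The preload tarball ships the images under their old k8s.gcr.io names, while this minikube expects registry.k8s.io tags, so the check fails and the slower cached-image path below is taken. The equivalent check by hand:

    # Is the expected tag present in the daemon? (-x whole line, -F literal match)
    docker images --format '{{.Repository}}:{{.Tag}}' \
      | grep -qxF 'registry.k8s.io/kube-apiserver:v1.24.1' \
      && echo preloaded || echo 'not preloaded'
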
	I0408 10:53:16.489095    9084 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 10:53:16.492383    9084 ssh_runner.go:195] Run: which lz4
	I0408 10:53:16.493697    9084 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 10:53:16.494802    9084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 10:53:16.494814    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0408 10:53:17.251143    9084 docker.go:649] duration metric: took 757.476417ms to copy over tarball
	I0408 10:53:17.251215    9084 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
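
The preload restore is a straight copy of the lz4 tarball followed by extraction into /var, preserving extended attributes so file capabilities survive. By hand, with the paths from the log:

    # Extract the preloaded image tarball the same way the runner does:
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4   # the runner deletes it after a successful extract
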
	I0408 10:53:16.751262    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:16.751372    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:53:16.763835    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:53:16.763914    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:53:16.775954    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:53:16.776031    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:53:16.787841    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:53:16.787913    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:53:16.799786    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:53:16.799862    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:53:16.811487    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:53:16.811563    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:53:16.823633    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:53:16.823712    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:53:16.835572    8917 logs.go:276] 0 containers: []
	W0408 10:53:16.835583    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:53:16.835650    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:53:16.847706    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:53:16.847727    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:53:16.847733    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:53:16.861167    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:53:16.861181    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:53:16.908608    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:53:16.908624    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:53:16.923781    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:53:16.923795    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:53:16.943318    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:53:16.943332    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:53:16.956201    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:53:16.956215    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:53:16.974243    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:53:16.974257    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:53:16.986959    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:53:16.986974    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:53:17.029663    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:53:17.029680    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:53:17.034836    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:53:17.034848    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:53:17.051156    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:53:17.051202    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:53:17.075884    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:53:17.075898    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:53:17.089233    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:53:17.089246    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:53:17.115153    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:53:17.115173    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:53:17.134325    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:53:17.134341    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:53:17.151099    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:53:17.151111    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:53:17.170372    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:53:17.170384    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:53:18.424761    9084 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.173523375s)
	I0408 10:53:18.424773    9084 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 10:53:18.440203    9084 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 10:53:18.442935    9084 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0408 10:53:18.447945    9084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:53:18.515788    9084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 10:53:20.061243    9084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.545428292s)
	I0408 10:53:20.061332    9084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 10:53:20.076424    9084 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0408 10:53:20.076439    9084 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0408 10:53:20.076444    9084 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 10:53:20.083205    9084 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0408 10:53:20.083202    9084 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 10:53:20.083298    9084 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 10:53:20.083429    9084 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 10:53:20.083476    9084 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0408 10:53:20.083529    9084 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 10:53:20.083720    9084 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:53:20.084038    9084 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:53:20.093093    9084 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 10:53:20.093168    9084 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0408 10:53:20.093930    9084 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 10:53:20.093944    9084 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 10:53:20.094016    9084 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 10:53:20.094034    9084 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:53:20.094054    9084 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:53:20.094132    9084 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0408 10:53:20.479844    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0408 10:53:20.495484    9084 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0408 10:53:20.495506    9084 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0408 10:53:20.495554    9084 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0408 10:53:20.504624    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 10:53:20.505760    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0408 10:53:20.505850    9084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0408 10:53:20.514362    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0408 10:53:20.515572    9084 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0408 10:53:20.515591    9084 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 10:53:20.515598    9084 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0408 10:53:20.515619    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0408 10:53:20.515631    9084 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 10:53:20.520364    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0408 10:53:20.530224    9084 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0408 10:53:20.530245    9084 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 10:53:20.530310    9084 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0408 10:53:20.534854    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0408 10:53:20.538033    9084 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0408 10:53:20.538046    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0408 10:53:20.541059    9084 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0408 10:53:20.541081    9084 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 10:53:20.541141    9084 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0408 10:53:20.545938    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0408 10:53:20.552236    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0408 10:53:20.579693    9084 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0408 10:53:20.579734    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0408 10:53:20.579779    9084 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0408 10:53:20.579796    9084 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 10:53:20.579841    9084 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	W0408 10:53:20.588371    9084 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0408 10:53:20.588499    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:53:20.589815    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0408 10:53:20.599263    9084 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0408 10:53:20.599290    9084 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:53:20.599360    9084 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:53:20.609421    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0408 10:53:20.609545    9084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0408 10:53:20.610907    9084 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0408 10:53:20.610920    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0408 10:53:20.642983    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0408 10:53:20.645612    9084 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0408 10:53:20.645630    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0408 10:53:20.653196    9084 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0408 10:53:20.653218    9084 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0408 10:53:20.653272    9084 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0408 10:53:20.689328    9084 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0408 10:53:20.689393    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0408 10:53:20.689486    9084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0408 10:53:20.691006    9084 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0408 10:53:20.691019    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W0408 10:53:20.834015    9084 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0408 10:53:20.834118    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:53:20.857868    9084 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0408 10:53:20.857894    9084 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:53:20.857950    9084 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:53:20.871711    9084 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0408 10:53:20.871728    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0408 10:53:20.879249    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0408 10:53:20.879371    9084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0408 10:53:21.019587    9084 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0408 10:53:21.019634    9084 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0408 10:53:21.019661    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0408 10:53:21.047293    9084 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0408 10:53:21.047309    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0408 10:53:21.285508    9084 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0408 10:53:21.285549    9084 cache_images.go:92] duration metric: took 1.20909225s to LoadCachedImages
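
Each cached image above went through the same three steps: stat to see whether the archive is already on the node, scp it over if not, then pipe it into docker load and confirm the tag. For the pause image, with paths from the log:

    # 1) Already on the node? (non-zero exit means it must be copied over)
    stat -c '%s %y' /var/lib/minikube/images/pause_3.7
    # 2) (the runner copies the archive over ssh when the stat fails)
    # 3) Load the archive into the docker daemon and confirm the tag resolves:
    sudo sh -c 'docker load < /var/lib/minikube/images/pause_3.7'
    docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7
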
	W0408 10:53:21.285587    9084 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0408 10:53:21.285600    9084 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0408 10:53:21.285656    9084 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-476000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
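
The unit fragment above is installed as a systemd drop-in (the 10-kubeadm.conf path appears a few lines below). A sketch of writing the same drop-in by hand, reusing the ExecStart line from the log:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-476000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
    EOF
    sudo systemctl daemon-reload
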
	I0408 10:53:21.285720    9084 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0408 10:53:21.299346    9084 cni.go:84] Creating CNI manager for ""
	I0408 10:53:21.299360    9084 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:53:21.299365    9084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 10:53:21.299374    9084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-476000 NodeName:stopped-upgrade-476000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 10:53:21.299444    9084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-476000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
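
With the config rendered, the restart path below replays individual kubeadm init phases against it. A cheap way to sanity-check the YAML first is a dry run (kubeadm's standard --dry-run flag; not something the runner does here):

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
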
	I0408 10:53:21.299514    9084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0408 10:53:21.302278    9084 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 10:53:21.302304    9084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 10:53:21.305184    9084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0408 10:53:21.310073    9084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 10:53:21.314981    9084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0408 10:53:21.320237    9084 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0408 10:53:21.321355    9084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 10:53:21.324744    9084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:53:21.391192    9084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 10:53:21.406274    9084 certs.go:68] Setting up /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000 for IP: 10.0.2.15
	I0408 10:53:21.406284    9084 certs.go:194] generating shared ca certs ...
	I0408 10:53:21.406293    9084 certs.go:226] acquiring lock for ca certs: {Name:mkfcdee1cac51c6f74fa377d8d75e68d66123e1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:53:21.406452    9084 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.key
	I0408 10:53:21.406501    9084 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/proxy-client-ca.key
	I0408 10:53:21.406506    9084 certs.go:256] generating profile certs ...
	I0408 10:53:21.406604    9084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/client.key
	I0408 10:53:21.406621    9084 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.key.b209df07
	I0408 10:53:21.406643    9084 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.crt.b209df07 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0408 10:53:21.503350    9084 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.crt.b209df07 ...
	I0408 10:53:21.503366    9084 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.crt.b209df07: {Name:mk157ba66346fcfc45e97c4ae63aceb5f9cbdb80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:53:21.503698    9084 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.key.b209df07 ...
	I0408 10:53:21.503704    9084 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.key.b209df07: {Name:mk4f23acaf4862cb3acdffcb9c85638e6ba51c52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:53:21.503836    9084 certs.go:381] copying /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.crt.b209df07 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.crt
	I0408 10:53:21.503955    9084 certs.go:385] copying /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.key.b209df07 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.key
	I0408 10:53:21.504094    9084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/proxy-client.key
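
The apiserver certificate is minted with the cluster service IP, loopback, and the node IP as SANs. Inspecting the generated cert confirms the list (standard openssl usage; the path is the profile cert from the log):

    openssl x509 -noout -text \
      -in /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
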
	I0408 10:53:21.504219    9084 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/7043.pem (1338 bytes)
	W0408 10:53:21.504246    9084 certs.go:480] ignoring /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/7043_empty.pem, impossibly tiny 0 bytes
	I0408 10:53:21.504250    9084 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 10:53:21.504268    9084 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem (1082 bytes)
	I0408 10:53:21.504289    9084 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem (1123 bytes)
	I0408 10:53:21.504306    9084 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/key.pem (1675 bytes)
	I0408 10:53:21.504342    9084 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/files/etc/ssl/certs/70432.pem (1708 bytes)
	I0408 10:53:21.504667    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 10:53:21.511520    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 10:53:21.518160    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 10:53:21.525469    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 10:53:21.534769    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 10:53:21.541760    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 10:53:21.549220    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 10:53:21.555842    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 10:53:21.562361    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/files/etc/ssl/certs/70432.pem --> /usr/share/ca-certificates/70432.pem (1708 bytes)
	I0408 10:53:21.569482    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 10:53:21.576292    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/7043.pem --> /usr/share/ca-certificates/7043.pem (1338 bytes)
	I0408 10:53:21.582771    9084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 10:53:21.587705    9084 ssh_runner.go:195] Run: openssl version
	I0408 10:53:21.589418    9084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7043.pem && ln -fs /usr/share/ca-certificates/7043.pem /etc/ssl/certs/7043.pem"
	I0408 10:53:21.592648    9084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7043.pem
	I0408 10:53:21.594034    9084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 17:36 /usr/share/ca-certificates/7043.pem
	I0408 10:53:21.594051    9084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7043.pem
	I0408 10:53:21.595877    9084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7043.pem /etc/ssl/certs/51391683.0"
	I0408 10:53:21.598527    9084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70432.pem && ln -fs /usr/share/ca-certificates/70432.pem /etc/ssl/certs/70432.pem"
	I0408 10:53:21.601727    9084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70432.pem
	I0408 10:53:21.603131    9084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 17:36 /usr/share/ca-certificates/70432.pem
	I0408 10:53:21.603146    9084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70432.pem
	I0408 10:53:21.604802    9084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/70432.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 10:53:21.607588    9084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 10:53:21.610333    9084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 10:53:21.611675    9084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 17:49 /usr/share/ca-certificates/minikubeCA.pem
	I0408 10:53:21.611692    9084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 10:53:21.613272    9084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
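
The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash values, which is how the system trust store locates a CA at verification time. The same linking by hand for the minikube CA:

    # Derive the c_rehash-style name and create the link, as the runner does:
    h="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
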
	I0408 10:53:21.616229    9084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 10:53:21.617546    9084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 10:53:21.619458    9084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 10:53:21.621135    9084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 10:53:21.622941    9084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 10:53:21.624622    9084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 10:53:21.626305    9084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
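
-checkend 86400 makes openssl exit non-zero when a certificate is within 86400 seconds (24 hours) of expiry, which is how the runner decides whether the existing control-plane certs can be reused. For example:

    # The exit status drives the reuse-or-regenerate decision; a missing file
    # also fails the check, which is the desired behaviour here.
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "apiserver cert valid for at least 24h"
    else
      echo "apiserver cert missing or expiring soon"
    fi
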
	I0408 10:53:21.628180    9084 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:st
opped-upgrade-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 10:53:21.628241    9084 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0408 10:53:21.638560    9084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 10:53:21.641730    9084 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 10:53:21.641737    9084 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 10:53:21.641740    9084 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 10:53:21.641765    9084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 10:53:21.645114    9084 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 10:53:21.645410    9084 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-476000" does not appear in /Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:53:21.645503    9084 kubeconfig.go:62] /Users/jenkins/minikube-integration/18585-6624/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-476000" cluster setting kubeconfig missing "stopped-upgrade-476000" context setting]
	I0408 10:53:21.645694    9084 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/kubeconfig: {Name:mk0efe9672745867be1d2d584884b1976098d9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:53:21.646112    9084 kapi.go:59] client config for stopped-upgrade-476000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/client.key", CAFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1042e3a70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 10:53:21.646414    9084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 10:53:21.649269    9084 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-476000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
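
Drift detection is just diff -u between the config already on the node and the freshly rendered one; diff's exit status (0 identical, 1 different) drives the reconfigure decision:

    # Same comparison the runner makes, with the paths from the log:
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "no drift"
    else
      echo "drift detected: reconfigure from kubeadm.yaml.new"
    fi
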
	I0408 10:53:21.649275    9084 kubeadm.go:1154] stopping kube-system containers ...
	I0408 10:53:21.649311    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0408 10:53:21.659773    9084 docker.go:483] Stopping containers: [3fcb068b7c04 57d2272b22f0 45e06afd7b3e d3d7a66c7373 c3d8e8e2e6e0 b25ec593bc5b 94347bae0439 f8feaed80a64]
	I0408 10:53:21.659836    9084 ssh_runner.go:195] Run: docker stop 3fcb068b7c04 57d2272b22f0 45e06afd7b3e d3d7a66c7373 c3d8e8e2e6e0 b25ec593bc5b 94347bae0439 f8feaed80a64
	I0408 10:53:21.670456    9084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 10:53:21.676278    9084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 10:53:21.678875    9084 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 10:53:21.678880    9084 kubeadm.go:156] found existing configuration files:
	
	I0408 10:53:21.678901    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf
	I0408 10:53:21.681683    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 10:53:21.681704    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 10:53:21.684624    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf
	I0408 10:53:21.686888    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 10:53:21.686908    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 10:53:21.689896    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf
	I0408 10:53:21.692877    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 10:53:21.692901    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 10:53:21.695459    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf
	I0408 10:53:21.697963    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 10:53:21.697991    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 10:53:21.700934    9084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 10:53:21.703503    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 10:53:21.725160    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 10:53:22.133590    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 10:53:22.276650    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 10:53:22.298064    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 10:53:22.317886    9084 api_server.go:52] waiting for apiserver process to appear ...
	I0408 10:53:22.317967    9084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 10:53:19.690152    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:22.820366    9084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 10:53:23.320084    9084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 10:53:23.331213    9084 api_server.go:72] duration metric: took 1.01332225s to wait for apiserver process to appear ...
	I0408 10:53:23.331228    9084 api_server.go:88] waiting for apiserver healthz status ...
	I0408 10:53:23.331236    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
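
Both runners (PIDs 8917 and 9084) are polling the same endpoint. An equivalent quick probe by hand, assuming curl on the guest (-k skips TLS verification, whereas minikube's own client trusts the cluster CA):

    # Non-zero exit until the apiserver starts answering /healthz:
    curl -fsk --max-time 2 https://10.0.2.15:8443/healthz || echo "apiserver not healthy yet"
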
	I0408 10:53:24.692463    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:24.692770    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:53:24.722308    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:53:24.722442    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:53:24.741782    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:53:24.741875    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:53:24.755492    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:53:24.755574    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:53:24.769812    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:53:24.769881    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:53:24.784590    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:53:24.784652    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:53:24.799405    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:53:24.799479    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:53:24.814749    8917 logs.go:276] 0 containers: []
	W0408 10:53:24.814762    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:53:24.814829    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:53:24.829521    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:53:24.829554    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:53:24.829562    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:53:24.868479    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:53:24.868490    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:53:24.904125    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:53:24.904139    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:53:24.927469    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:53:24.927479    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:53:24.938655    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:53:24.938664    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:53:24.952792    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:53:24.952803    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:53:24.970410    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:53:24.970424    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:53:24.984754    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:53:24.984769    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:53:24.998533    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:53:24.998546    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:53:25.018288    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:53:25.018300    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:53:25.033027    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:53:25.033041    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:53:25.052559    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:53:25.052573    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:53:25.065375    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:53:25.065389    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:53:25.090642    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:53:25.090659    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:53:25.095709    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:53:25.095721    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:53:25.108690    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:53:25.108703    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:53:25.121341    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:53:25.121353    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
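The gathering pass above repeats one pattern per component: list container IDs with a name-filtered docker ps, then tail each container's logs. A condensed shell sketch of that loop (component names and the 400-line tail taken directly from the commands in the log) is:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	  for id in $(docker ps -a --filter=name=k8s_${c} --format '{{.ID}}'); do
	    echo "== ${c} ${id} =="
	    docker logs --tail 400 "${id}"
	  done
	done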
	I0408 10:53:27.635548    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:28.333374    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:28.333396    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:32.638194    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:32.638364    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:53:32.654265    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:53:32.654336    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:53:32.665869    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:53:32.665943    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:53:32.676691    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:53:32.676754    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:53:32.687276    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:53:32.687348    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:53:32.697736    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:53:32.697809    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:53:32.708110    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:53:32.708175    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:53:32.720950    8917 logs.go:276] 0 containers: []
	W0408 10:53:32.720962    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:53:32.721020    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:53:32.735765    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:53:32.735783    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:53:32.735788    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:53:32.774990    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:53:32.775000    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:53:32.779889    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:53:32.779897    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:53:32.793627    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:53:32.793638    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:53:32.809940    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:53:32.809950    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:53:32.822265    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:53:32.822276    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:53:32.839938    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:53:32.839949    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:53:32.851091    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:53:32.851107    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:53:32.862533    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:53:32.862544    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:53:32.884489    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:53:32.884507    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:53:32.896171    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:53:32.896184    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:53:32.913608    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:53:32.913619    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:53:32.925160    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:53:32.925171    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:53:33.333678    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:33.333725    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:32.960430    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:53:32.962011    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:53:32.976521    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:53:32.976534    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:53:33.000986    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:53:33.000998    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:53:33.014728    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:53:33.014740    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:53:35.532844    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:38.334109    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:38.334160    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:40.535108    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:40.535219    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:53:40.546884    8917 logs.go:276] 2 containers: [ac154c02908e ace76edac2a1]
	I0408 10:53:40.546962    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:53:40.567370    8917 logs.go:276] 2 containers: [4feb9e612722 80da0ca46341]
	I0408 10:53:40.567440    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:53:40.582484    8917 logs.go:276] 1 containers: [016dbd4230f7]
	I0408 10:53:40.582578    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:53:40.593220    8917 logs.go:276] 2 containers: [4ade28b41d9a 1df516cfd59e]
	I0408 10:53:40.593293    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:53:40.603951    8917 logs.go:276] 1 containers: [c2ca39c79143]
	I0408 10:53:40.604021    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:53:40.615376    8917 logs.go:276] 2 containers: [54eeec63d0a2 51867dc58ea1]
	I0408 10:53:40.615450    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:53:40.626399    8917 logs.go:276] 0 containers: []
	W0408 10:53:40.626411    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:53:40.626474    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:53:40.638045    8917 logs.go:276] 2 containers: [570ed91ec9a5 c0e2cbf4890d]
	I0408 10:53:40.638064    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:53:40.638070    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:53:40.643136    8917 logs.go:123] Gathering logs for coredns [016dbd4230f7] ...
	I0408 10:53:40.643148    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 016dbd4230f7"
	I0408 10:53:40.654876    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:53:40.654891    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:53:40.695334    8917 logs.go:123] Gathering logs for etcd [4feb9e612722] ...
	I0408 10:53:40.695352    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4feb9e612722"
	I0408 10:53:40.715114    8917 logs.go:123] Gathering logs for etcd [80da0ca46341] ...
	I0408 10:53:40.715126    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80da0ca46341"
	I0408 10:53:40.738680    8917 logs.go:123] Gathering logs for kube-proxy [c2ca39c79143] ...
	I0408 10:53:40.738692    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2ca39c79143"
	I0408 10:53:40.751048    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:53:40.751059    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:53:40.775178    8917 logs.go:123] Gathering logs for kube-apiserver [ac154c02908e] ...
	I0408 10:53:40.775195    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac154c02908e"
	I0408 10:53:40.790410    8917 logs.go:123] Gathering logs for kube-scheduler [4ade28b41d9a] ...
	I0408 10:53:40.790423    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ade28b41d9a"
	I0408 10:53:40.801955    8917 logs.go:123] Gathering logs for kube-scheduler [1df516cfd59e] ...
	I0408 10:53:40.801966    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1df516cfd59e"
	I0408 10:53:40.816907    8917 logs.go:123] Gathering logs for kube-controller-manager [51867dc58ea1] ...
	I0408 10:53:40.816918    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51867dc58ea1"
	I0408 10:53:40.828823    8917 logs.go:123] Gathering logs for storage-provisioner [570ed91ec9a5] ...
	I0408 10:53:40.828839    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570ed91ec9a5"
	I0408 10:53:40.841345    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:53:40.841357    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:53:40.853156    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:53:40.853169    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:53:40.892586    8917 logs.go:123] Gathering logs for kube-apiserver [ace76edac2a1] ...
	I0408 10:53:40.892598    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace76edac2a1"
	I0408 10:53:40.915862    8917 logs.go:123] Gathering logs for kube-controller-manager [54eeec63d0a2] ...
	I0408 10:53:40.915873    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54eeec63d0a2"
	I0408 10:53:40.933110    8917 logs.go:123] Gathering logs for storage-provisioner [c0e2cbf4890d] ...
	I0408 10:53:40.933121    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0e2cbf4890d"
	I0408 10:53:43.334779    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:43.334826    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:43.446683    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:48.449002    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:48.449063    8917 kubeadm.go:591] duration metric: took 4m4.196734834s to restartPrimaryControlPlane
	W0408 10:53:48.449121    8917 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 10:53:48.449151    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0408 10:53:49.434095    8917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 10:53:49.439180    8917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 10:53:49.441974    8917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 10:53:49.444798    8917 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 10:53:49.444803    8917 kubeadm.go:156] found existing configuration files:
	
	I0408 10:53:49.444825    8917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/admin.conf
	I0408 10:53:49.447747    8917 kubeadm.go:162] "https://control-plane.minikube.internal:51288" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 10:53:49.447773    8917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 10:53:49.450272    8917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/kubelet.conf
	I0408 10:53:49.453293    8917 kubeadm.go:162] "https://control-plane.minikube.internal:51288" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 10:53:49.453318    8917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 10:53:49.456382    8917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/controller-manager.conf
	I0408 10:53:49.459040    8917 kubeadm.go:162] "https://control-plane.minikube.internal:51288" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 10:53:49.459061    8917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 10:53:49.461768    8917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/scheduler.conf
	I0408 10:53:49.464758    8917 kubeadm.go:162] "https://control-plane.minikube.internal:51288" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51288 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 10:53:49.464779    8917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
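The four check-and-remove steps above all follow the same rule: if a kubeconfig under /etc/kubernetes does not reference the expected control-plane endpoint, delete it so the subsequent kubeadm init can regenerate it. A compact equivalent of that cleanup (endpoint copied from the log) is:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:51288 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done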
	I0408 10:53:49.467626    8917 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 10:53:49.486204    8917 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0408 10:53:49.486269    8917 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 10:53:49.536043    8917 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 10:53:49.536104    8917 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 10:53:49.536173    8917 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 10:53:49.585483    8917 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 10:53:49.589688    8917 out.go:204]   - Generating certificates and keys ...
	I0408 10:53:49.589721    8917 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 10:53:49.589750    8917 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 10:53:49.589794    8917 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 10:53:49.589838    8917 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 10:53:49.589876    8917 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 10:53:49.589911    8917 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 10:53:49.589943    8917 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 10:53:49.589980    8917 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 10:53:49.590031    8917 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 10:53:49.590069    8917 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 10:53:49.590086    8917 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 10:53:49.590112    8917 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 10:53:49.614914    8917 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 10:53:49.759706    8917 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 10:53:49.845325    8917 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 10:53:49.923793    8917 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 10:53:49.955317    8917 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 10:53:49.955624    8917 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 10:53:49.955649    8917 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 10:53:50.037891    8917 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 10:53:48.335468    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:48.335501    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:50.042015    8917 out.go:204]   - Booting up control plane ...
	I0408 10:53:50.042159    8917 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 10:53:50.042199    8917 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 10:53:50.043278    8917 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 10:53:50.043655    8917 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 10:53:50.044606    8917 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 10:53:54.546607    8917 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501536 seconds
	I0408 10:53:54.546673    8917 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 10:53:54.550036    8917 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 10:53:55.065727    8917 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 10:53:55.066012    8917 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-603000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 10:53:55.570721    8917 kubeadm.go:309] [bootstrap-token] Using token: lglln4.ya05jpnnv5na7a65
	I0408 10:53:55.576359    8917 out.go:204]   - Configuring RBAC rules ...
	I0408 10:53:55.576422    8917 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 10:53:55.576481    8917 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 10:53:55.580905    8917 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 10:53:55.581777    8917 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 10:53:55.582521    8917 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 10:53:55.583418    8917 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 10:53:55.586474    8917 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 10:53:55.770007    8917 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 10:53:55.979154    8917 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 10:53:55.979635    8917 kubeadm.go:309] 
	I0408 10:53:55.979665    8917 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 10:53:55.979669    8917 kubeadm.go:309] 
	I0408 10:53:55.979705    8917 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 10:53:55.979708    8917 kubeadm.go:309] 
	I0408 10:53:55.979721    8917 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 10:53:55.979749    8917 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 10:53:55.979784    8917 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 10:53:55.979789    8917 kubeadm.go:309] 
	I0408 10:53:55.979814    8917 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 10:53:55.979817    8917 kubeadm.go:309] 
	I0408 10:53:55.979844    8917 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 10:53:55.979848    8917 kubeadm.go:309] 
	I0408 10:53:55.979873    8917 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 10:53:55.979917    8917 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 10:53:55.979961    8917 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 10:53:55.979965    8917 kubeadm.go:309] 
	I0408 10:53:55.980011    8917 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 10:53:55.980097    8917 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 10:53:55.980102    8917 kubeadm.go:309] 
	I0408 10:53:55.980153    8917 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token lglln4.ya05jpnnv5na7a65 \
	I0408 10:53:55.980258    8917 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d8c71b19ee4cc7e59001527410e809db8edc3975b4a5e7aa364a2679c02ff296 \
	I0408 10:53:55.980269    8917 kubeadm.go:309] 	--control-plane 
	I0408 10:53:55.980272    8917 kubeadm.go:309] 
	I0408 10:53:55.980322    8917 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 10:53:55.980327    8917 kubeadm.go:309] 
	I0408 10:53:55.980366    8917 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token lglln4.ya05jpnnv5na7a65 \
	I0408 10:53:55.980423    8917 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d8c71b19ee4cc7e59001527410e809db8edc3975b4a5e7aa364a2679c02ff296 
	I0408 10:53:55.980489    8917 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 10:53:55.980496    8917 cni.go:84] Creating CNI manager for ""
	I0408 10:53:55.980503    8917 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:53:55.987748    8917 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 10:53:55.991767    8917 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 10:53:55.995221    8917 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
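The 496-byte conflist written above is not dumped in the log. For orientation only, a typical bridge CNI configuration of the kind minikube generates looks like the sketch below; the field values here are illustrative assumptions, not the exact file contents:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF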
	I0408 10:53:56.000678    8917 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 10:53:56.000735    8917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 10:53:56.000736    8917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-603000 minikube.k8s.io/updated_at=2024_04_08T10_53_56_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f9de8f0b190a4305b11b3a925ec3e499cf3fc021 minikube.k8s.io/name=running-upgrade-603000 minikube.k8s.io/primary=true
	I0408 10:53:56.035454    8917 kubeadm.go:1107] duration metric: took 34.768084ms to wait for elevateKubeSystemPrivileges
	I0408 10:53:56.045083    8917 ops.go:34] apiserver oom_adj: -16
	W0408 10:53:56.045108    8917 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 10:53:56.045118    8917 kubeadm.go:393] duration metric: took 4m11.806888917s to StartCluster
	I0408 10:53:56.045127    8917 settings.go:142] acquiring lock: {Name:mk6ed0f877152c89dfeb4cfbed60423b324ecbe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:53:56.045294    8917 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:53:56.045730    8917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/kubeconfig: {Name:mk0efe9672745867be1d2d584884b1976098d9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:53:56.045922    8917 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:53:56.050680    8917 out.go:177] * Verifying Kubernetes components...
	I0408 10:53:56.045947    8917 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 10:53:56.046113    8917 config.go:182] Loaded profile config "running-upgrade-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 10:53:56.057723    8917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:53:56.057744    8917 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-603000"
	I0408 10:53:56.057763    8917 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-603000"
	W0408 10:53:56.057770    8917 addons.go:243] addon storage-provisioner should already be in state true
	I0408 10:53:56.057787    8917 host.go:66] Checking if "running-upgrade-603000" exists ...
	I0408 10:53:56.057745    8917 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-603000"
	I0408 10:53:56.057800    8917 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-603000"
	I0408 10:53:56.058905    8917 kapi.go:59] client config for running-upgrade-603000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/running-upgrade-603000/client.key", CAFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10604fa70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 10:53:56.059250    8917 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-603000"
	W0408 10:53:56.059255    8917 addons.go:243] addon default-storageclass should already be in state true
	I0408 10:53:56.059261    8917 host.go:66] Checking if "running-upgrade-603000" exists ...
	I0408 10:53:56.063680    8917 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:53:53.336326    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:53.336347    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:56.066784    8917 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 10:53:56.066790    8917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 10:53:56.066796    8917 sshutil.go:53] new ssh client: &{IP:localhost Port:51256 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/running-upgrade-603000/id_rsa Username:docker}
	I0408 10:53:56.067608    8917 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 10:53:56.067611    8917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 10:53:56.067615    8917 sshutil.go:53] new ssh client: &{IP:localhost Port:51256 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/running-upgrade-603000/id_rsa Username:docker}
	I0408 10:53:56.156097    8917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 10:53:56.161096    8917 api_server.go:52] waiting for apiserver process to appear ...
	I0408 10:53:56.161134    8917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 10:53:56.164902    8917 api_server.go:72] duration metric: took 118.96625ms to wait for apiserver process to appear ...
	I0408 10:53:56.164910    8917 api_server.go:88] waiting for apiserver healthz status ...
	I0408 10:53:56.164918    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:56.218021    8917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 10:53:56.219158    8917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 10:53:58.337359    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:58.337432    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:01.167112    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:01.167144    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:03.338308    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:03.338404    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:06.167502    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:06.167549    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:08.340836    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:08.340859    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:11.167970    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:11.168006    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:13.342295    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:13.342343    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:16.168524    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:16.168578    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:18.344734    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:18.344766    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:21.169327    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:21.169346    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:26.170176    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:26.170220    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0408 10:54:26.560534    8917 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0408 10:54:26.565911    8917 out.go:177] * Enabled addons: storage-provisioner
	I0408 10:54:23.346107    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:23.346387    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:54:23.372684    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:54:23.372790    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:54:23.388871    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:54:23.388953    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:54:23.401701    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:54:23.401777    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:54:23.412860    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:54:23.412944    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:54:23.422670    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:54:23.422736    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:54:23.434098    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:54:23.434170    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:54:23.444453    9084 logs.go:276] 0 containers: []
	W0408 10:54:23.444470    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:54:23.444522    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:54:23.454687    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:54:23.454703    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:54:23.454708    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:54:23.471849    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:54:23.471860    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:54:23.483521    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:54:23.483534    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:54:23.523199    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:54:23.523210    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:54:23.535677    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:54:23.535691    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:54:23.551159    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:54:23.551171    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:54:23.562707    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:54:23.562719    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:54:23.588624    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:54:23.588635    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:54:23.629875    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:54:23.629886    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:54:23.649502    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:54:23.649519    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:54:23.664613    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:54:23.664627    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:54:23.675819    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:54:23.675830    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:54:23.687843    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:54:23.687853    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:54:23.692446    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:54:23.692455    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:54:23.804585    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:54:23.804599    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:54:23.818851    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:54:23.818862    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:54:26.333686    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:26.577842    8917 addons.go:505] duration metric: took 30.531701833s for enable addons: enabled=[storage-provisioner]
	I0408 10:54:31.334069    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:31.334323    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:54:31.371242    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:54:31.371401    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:54:31.390676    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:54:31.390776    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:54:31.404629    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:54:31.404723    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:54:31.416966    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:54:31.417042    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:54:31.428429    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:54:31.428489    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:54:31.438836    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:54:31.438905    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:54:31.450262    9084 logs.go:276] 0 containers: []
	W0408 10:54:31.450273    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:54:31.450340    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:54:31.460764    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:54:31.460780    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:54:31.460796    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:54:31.464760    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:54:31.464770    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:54:31.500015    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:54:31.500029    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:54:31.515023    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:54:31.515033    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:54:31.539339    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:54:31.539348    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:54:31.550865    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:54:31.550876    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:54:31.591633    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:54:31.591649    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:54:31.604029    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:54:31.604040    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:54:31.621850    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:54:31.621861    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:54:31.635376    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:54:31.635390    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:54:31.649656    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:54:31.649666    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:54:31.663917    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:54:31.663931    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:54:31.677567    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:54:31.677580    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:54:31.719049    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:54:31.719064    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:54:31.731411    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:54:31.731422    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:54:31.748753    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:54:31.748766    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:54:31.171627    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:31.171679    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:34.262619    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:36.173249    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:36.173269    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:39.264456    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:39.264697    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:54:39.281980    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:54:39.282081    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:54:39.295774    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:54:39.295863    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:54:39.309500    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:54:39.309580    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:54:39.323692    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:54:39.323761    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:54:39.334363    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:54:39.334433    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:54:39.344540    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:54:39.344611    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:54:39.354668    9084 logs.go:276] 0 containers: []
	W0408 10:54:39.354679    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:54:39.354742    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:54:39.367225    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:54:39.367246    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:54:39.367251    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:54:39.380935    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:54:39.380945    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:54:39.392593    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:54:39.392603    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:54:39.410552    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:54:39.410563    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:54:39.434507    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:54:39.434518    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:54:39.448209    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:54:39.448218    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:54:39.459807    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:54:39.459818    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:54:39.475562    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:54:39.475580    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:54:39.487689    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:54:39.487709    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:54:39.499810    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:54:39.499819    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:54:39.538987    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:54:39.539004    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:54:39.543662    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:54:39.543686    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:54:39.581797    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:54:39.581815    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:54:39.593337    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:54:39.593353    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:54:39.632209    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:54:39.632221    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:54:39.646678    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:54:39.646692    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
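The block above is one full discovery-and-gather cycle: for each control-plane component the runner asks Docker for matching container IDs (`docker ps -a --filter=name=k8s_<component> --format={{.ID}}`), then tails each hit. A minimal Go sketch of that discovery step follows; the component names and the `k8s_` prefix are taken from the log, while the package layout and error handling are illustrative only, not minikube's actual code.

	// container-discovery sketch (hypothetical, run locally; requires docker)
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println("docker ps failed:", err)
				continue
			}
			if len(ids) == 0 {
				// corresponds to the W-level line: No container was found matching "kindnet"
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}

Two IDs per component (e.g. kube-apiserver [dc533809f89d 57d2272b22f0]) indicate an exited container alongside the current one, which is why `-a` is passed.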
	I0408 10:54:42.162542    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:41.174327    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:41.174368    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:47.165185    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:47.165386    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:54:47.177576    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:54:47.177669    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:54:47.187935    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:54:47.188004    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:54:47.198159    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:54:47.198228    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:54:47.211431    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:54:47.211505    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:54:47.226247    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:54:47.226312    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:54:47.236942    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:54:47.237024    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:54:47.249342    9084 logs.go:276] 0 containers: []
	W0408 10:54:47.249356    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:54:47.249426    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:54:47.259606    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:54:47.259624    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:54:47.259629    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:54:47.275415    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:54:47.275426    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:54:47.317378    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:54:47.317388    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:54:47.331947    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:54:47.331957    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:54:47.345946    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:54:47.345957    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:54:47.359309    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:54:47.359322    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:54:47.371082    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:54:47.371092    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:54:47.409387    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:54:47.409396    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:54:47.413581    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:54:47.413587    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:54:47.428518    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:54:47.428528    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:54:47.440195    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:54:47.440205    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:54:47.464438    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:54:47.464450    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:54:47.479195    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:54:47.479205    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:54:47.493644    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:54:47.493654    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:54:47.505402    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:54:47.505412    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:54:47.548107    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:54:47.548119    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:54:46.176609    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:46.176662    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:50.067306    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:51.178982    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:51.179058    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:55.069968    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:55.070472    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:54:55.108688    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:54:55.108814    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:54:55.130103    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:54:55.130232    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:54:55.147189    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:54:55.147261    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:54:55.163458    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:54:55.163537    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:54:55.174175    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:54:55.174249    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:54:55.184859    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:54:55.184924    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:54:55.195346    9084 logs.go:276] 0 containers: []
	W0408 10:54:55.195361    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:54:55.195421    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:54:55.205551    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:54:55.205567    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:54:55.205572    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:54:55.217322    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:54:55.217333    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:54:55.241517    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:54:55.241524    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:54:55.256757    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:54:55.256769    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:54:55.275007    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:54:55.275018    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:54:55.286865    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:54:55.286879    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:54:55.298909    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:54:55.298919    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:54:55.311334    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:54:55.311345    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:54:55.325200    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:54:55.325209    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:54:55.361897    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:54:55.361907    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:54:55.366602    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:54:55.366610    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:54:55.403655    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:54:55.403667    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:54:55.417584    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:54:55.417594    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:54:55.436919    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:54:55.436929    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:54:55.474545    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:54:55.474555    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:54:55.489204    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:54:55.489215    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:54:56.181568    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:56.181773    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:54:56.202682    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:54:56.202767    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:54:56.214250    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:54:56.214323    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:54:56.225555    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:54:56.225628    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:54:56.235762    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:54:56.235837    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:54:56.246204    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:54:56.246271    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:54:56.261485    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:54:56.261565    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:54:56.272710    8917 logs.go:276] 0 containers: []
	W0408 10:54:56.272727    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:54:56.272791    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:54:56.283969    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:54:56.283989    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:54:56.283997    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:54:56.296128    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:54:56.296139    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:54:56.307420    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:54:56.307433    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:54:56.322755    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:54:56.322769    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:54:56.356407    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:54:56.356418    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:54:56.361049    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:54:56.361058    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:54:56.398197    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:54:56.398210    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:54:56.413653    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:54:56.413665    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:54:56.428432    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:54:56.428445    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:54:56.440438    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:54:56.440450    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:54:56.451747    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:54:56.451758    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:54:56.469011    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:54:56.469022    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:54:56.480265    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:54:56.480275    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
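Between gathering passes, both processes sit in the api_server.go wait loop: poll /healthz until it answers or an overall deadline passes. A minimal sketch of that pattern, assuming a 5s per-request timeout and a 2-minute overall budget read off the log spacing (not minikube's actual constants); the error text produced by net/http matches the "Client.Timeout exceeded while awaiting headers" lines above.

	// healthz wait-loop sketch (hypothetical constants)
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // failing requests surface as Client.Timeout exceeded
			Transport: &http.Transport{
				// the apiserver serves a self-signed cert on 10.0.2.15:8443
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://10.0.2.15:8443/healthz"
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			fmt.Println("Checking apiserver healthz at", url, "...")
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("stopped:", err) // e.g. context deadline exceeded
				time.Sleep(time.Second)      // illustrative pacing between retries
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		fmt.Println("gave up waiting for apiserver")
	}

In this run the loop never succeeds, so each timeout triggers another full discovery-and-gather cycle, which is what produces the repetition below.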
	I0408 10:54:58.003264    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:59.007435    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:03.005911    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:03.006284    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:03.039309    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:03.039446    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:03.057761    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:03.057848    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:03.082004    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:03.082087    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:03.093327    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:03.093397    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:03.103605    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:03.103676    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:03.114428    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:03.114496    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:03.124664    9084 logs.go:276] 0 containers: []
	W0408 10:55:03.124675    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:03.124738    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:03.135279    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:03.135297    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:03.135302    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:03.153267    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:03.153279    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:03.177170    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:03.177178    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:03.188564    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:03.188574    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:03.202640    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:03.202650    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:03.214102    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:03.214114    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:03.228895    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:03.228906    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:55:03.240131    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:03.240143    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:03.254464    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:03.254474    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:03.266342    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:03.266351    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:03.303547    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:03.303562    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:03.342461    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:03.342478    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:03.354433    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:03.354445    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:03.359253    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:03.359261    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:03.373241    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:03.373251    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:03.410122    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:03.410133    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:55:05.925775    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:04.010181    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:04.010349    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:04.025195    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:04.025280    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:04.044954    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:04.045031    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:04.056616    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:04.056686    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:04.066919    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:04.066986    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:04.077067    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:04.077143    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:04.087668    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:04.087739    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:04.097694    8917 logs.go:276] 0 containers: []
	W0408 10:55:04.097704    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:04.097764    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:04.108100    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:55:04.108115    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:04.108121    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:55:04.123292    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:04.123303    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:04.158146    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:04.158154    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:04.195550    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:04.195563    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:04.210253    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:04.210264    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:04.224456    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:04.224469    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:04.241595    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:04.241606    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:04.252908    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:04.252920    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:04.278672    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:04.278681    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:04.290385    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:04.290396    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:04.294934    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:04.294941    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:04.306145    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:04.306156    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:04.318382    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:04.318394    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:55:06.834679    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:10.928208    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:10.928385    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:10.943391    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:10.943498    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:10.955006    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:10.955072    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:10.965199    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:10.965265    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:10.976203    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:10.976277    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:10.986779    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:10.986841    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:10.997121    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:10.997185    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:11.007337    9084 logs.go:276] 0 containers: []
	W0408 10:55:11.007349    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:11.007411    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:11.017718    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:11.017735    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:11.017741    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:11.029688    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:11.029699    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:11.043819    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:11.043829    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:11.055420    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:11.055430    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:11.080056    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:11.080063    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:11.099898    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:11.099908    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:11.114062    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:11.114075    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:11.128974    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:11.128987    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:11.146387    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:11.146397    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:11.187358    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:11.187368    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:55:11.200981    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:11.200994    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:55:11.211916    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:11.211928    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:11.223581    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:11.223591    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:11.235021    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:11.235031    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:11.272573    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:11.272582    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:11.276786    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:11.276796    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
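The gathering pass itself tails each discovered container (`docker logs --tail 400 <id>`) and the journald units for kubelet and Docker, plus a container-status check that prefers crictl and falls back to `docker ps -a`. A sketch under those assumptions, executed locally for illustration with stand-in container IDs (in minikube these commands run over SSH inside the guest):

	// log-gathering sketch (hypothetical inputs; requires docker, journalctl, sudo)
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and prints its combined output, as a log collector would.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			fmt.Println("command failed:", err)
		}
		fmt.Print(string(out))
	}

	func main() {
		// IDs as reported by the discovery step (stand-ins, not live IDs)
		containers := map[string]string{
			"kube-apiserver": "dc533809f89d",
			"etcd":           "de47585b049b",
		}
		for component, id := range containers {
			fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
			run("docker", "logs", "--tail", "400", id)
		}
		// Unit logs, as in: sudo journalctl -u kubelet -n 400
		run("sudo", "journalctl", "-u", "kubelet", "-n", "400")
		// Container status with the crictl-or-docker fallback seen in the log
		run("/bin/bash", "-c", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}

The `--tail 400` / `-n 400` caps bound the report size per cycle; the describe-nodes call above uses the guest's pinned kubectl binary (/var/lib/minikube/binaries/v1.24.1/kubectl) so output matches the cluster's Kubernetes version.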
	I0408 10:55:11.837248    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:11.837463    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:11.868164    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:11.868261    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:11.883214    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:11.883293    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:11.895638    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:11.895714    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:11.906465    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:11.906526    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:11.916806    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:11.916880    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:11.927205    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:11.927277    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:11.937465    8917 logs.go:276] 0 containers: []
	W0408 10:55:11.937475    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:11.937527    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:11.950524    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:55:11.950542    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:11.950547    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:11.967871    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:11.967882    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:11.991216    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:11.991224    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:12.025160    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:12.025171    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:12.029462    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:12.029471    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:12.063497    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:12.063511    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:12.078037    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:12.078048    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:12.089388    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:12.089400    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:12.100522    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:12.100532    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:12.114847    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:12.114860    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:12.131995    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:12.132007    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:55:12.147164    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:12.147175    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:55:12.158912    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:12.158926    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:13.815132    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:14.670665    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:18.818054    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:18.818792    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:18.861661    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:18.861773    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:18.880570    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:18.880659    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:18.895585    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:18.895665    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:18.907916    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:18.907983    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:18.925063    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:18.925141    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:18.937342    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:18.937418    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:18.947561    9084 logs.go:276] 0 containers: []
	W0408 10:55:18.947570    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:18.947622    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:18.958314    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:18.958333    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:18.958339    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:18.969940    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:18.969954    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:18.981749    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:18.981763    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:18.985755    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:18.985765    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:18.997900    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:18.997912    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:19.012509    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:19.012519    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:19.023974    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:19.023985    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:19.047774    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:19.047781    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:19.084655    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:19.084668    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:19.105018    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:19.105027    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:55:19.118479    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:19.118494    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:19.142732    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:19.142742    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:19.161522    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:19.161534    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:19.200711    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:19.200721    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:19.214597    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:19.214610    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:19.252068    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:19.252082    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:55:21.765195    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:19.671725    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:19.671968    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:19.700495    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:19.700617    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:19.716640    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:19.716728    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:19.730175    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:19.730255    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:19.741372    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:19.741443    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:19.751617    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:19.751690    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:19.761909    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:19.761979    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:19.772046    8917 logs.go:276] 0 containers: []
	W0408 10:55:19.772058    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:19.772119    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:19.782790    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:55:19.782804    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:19.782809    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:55:19.794584    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:19.794596    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:19.806080    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:19.806092    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:19.829120    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:19.829130    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:19.840224    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:19.840235    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:19.875599    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:19.875612    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:19.910084    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:19.910097    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:19.924675    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:19.924685    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:19.939109    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:19.939122    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:55:19.954092    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:19.954103    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:19.971982    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:19.971994    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:19.976507    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:19.976515    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:19.990659    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:19.990669    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:22.504304    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:26.767696    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:26.768060    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:26.801740    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:26.801904    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:26.819760    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:26.819852    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:26.833448    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:26.833518    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:26.844656    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:26.844732    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:26.855382    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:26.855463    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:26.866448    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:26.866514    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:26.877049    9084 logs.go:276] 0 containers: []
	W0408 10:55:26.877061    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:26.877114    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:26.887968    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:26.888011    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:26.888018    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:26.925511    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:26.925522    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:55:26.936814    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:26.936824    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:26.948688    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:26.948700    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:26.966338    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:26.966348    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:26.978123    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:26.978133    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:55:26.992014    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:26.992024    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:27.010215    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:27.010225    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:27.024571    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:27.024581    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:27.028787    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:27.028797    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:27.087517    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:27.087528    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:27.102752    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:27.102762    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:27.119001    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:27.119011    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:27.142248    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:27.142254    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:27.181980    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:27.181992    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:27.197760    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:27.197772    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:27.506603    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:27.506752    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:27.521309    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:27.521395    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:27.533101    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:27.533170    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:27.548667    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:27.548739    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:27.559123    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:27.559200    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:27.569689    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:27.569763    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:27.586672    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:27.586742    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:27.602587    8917 logs.go:276] 0 containers: []
	W0408 10:55:27.602599    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:27.602657    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:27.612818    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:55:27.612832    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:27.612839    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:27.646944    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:27.646957    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:27.662670    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:27.662682    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:27.676545    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:27.676556    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:55:27.691151    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:27.691162    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:27.709073    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:27.709084    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:27.720932    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:27.720943    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:27.755622    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:27.755634    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:27.767037    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:27.767048    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:27.778629    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:27.778642    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:55:27.789855    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:27.789866    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:27.813914    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:27.813924    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:27.824901    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:27.824914    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:29.713674    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:30.330223    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:34.716031    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:34.716227    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:34.732832    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:34.732916    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:34.746309    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:34.746384    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:34.758430    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:34.758496    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:34.769429    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:34.769495    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:34.780070    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:34.780145    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:34.794632    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:34.794706    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:34.804795    9084 logs.go:276] 0 containers: []
	W0408 10:55:34.804811    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:34.804866    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:34.815200    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:34.815218    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:34.815223    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:34.830207    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:34.830221    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:55:34.841487    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:34.841499    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:34.864486    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:34.864497    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:34.876034    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:34.876047    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:34.890144    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:34.890155    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:55:34.903382    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:34.903393    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:34.920640    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:34.920650    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:34.955600    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:34.955611    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:34.971849    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:34.971859    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:34.983386    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:34.983396    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:35.020995    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:35.021004    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:35.057032    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:35.057043    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:35.068660    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:35.068672    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:35.083042    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:35.083053    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:35.087618    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:35.087625    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:35.332865    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:35.333034    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:35.357611    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:35.357691    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:35.368661    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:35.368731    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:35.378907    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:35.378975    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:35.389861    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:35.389932    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:35.400222    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:35.400292    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:35.410860    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:35.410935    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:35.421245    8917 logs.go:276] 0 containers: []
	W0408 10:55:35.421256    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:35.421318    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:35.431964    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:55:35.431980    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:35.431986    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:35.466453    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:35.466466    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:35.481310    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:35.481322    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:35.495597    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:35.495607    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:35.508309    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:35.508323    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:55:35.521590    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:35.521600    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:35.539219    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:35.539230    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:35.573601    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:35.573609    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:35.577928    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:35.577935    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:35.589226    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:35.589238    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:35.601082    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:35.601094    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:35.624289    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:35.624296    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:35.638438    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:35.638450    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:55:37.601444    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:38.154864    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:42.603754    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:42.603923    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:42.619633    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:42.619715    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:42.629690    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:42.629763    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:42.640627    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:42.640698    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:42.652038    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:42.652110    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:42.662226    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:42.662290    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:42.689548    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:42.689618    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:42.700158    9084 logs.go:276] 0 containers: []
	W0408 10:55:42.700175    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:42.700237    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:42.710719    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:42.710738    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:42.710743    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:42.747176    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:42.747191    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:55:42.761665    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:42.761678    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:42.776860    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:42.776873    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:42.788347    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:42.788358    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:42.811647    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:42.811654    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:42.823229    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:42.823241    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:42.859790    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:42.859801    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:42.895157    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:42.895170    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:42.912589    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:42.912599    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:42.927328    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:42.927343    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:42.941717    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:42.941728    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:42.953236    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:42.953247    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:42.967422    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:42.967432    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:42.979263    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:42.979274    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:42.983940    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:42.983949    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:55:45.495341    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:43.157186    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:43.157346    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:43.177286    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:43.177390    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:43.192152    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:43.192231    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:43.204268    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:43.204346    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:43.215476    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:43.215539    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:43.226130    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:43.226221    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:43.237581    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:43.237648    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:43.247749    8917 logs.go:276] 0 containers: []
	W0408 10:55:43.247762    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:43.247817    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:43.258322    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:55:43.258337    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:43.258342    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:55:43.273671    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:43.273680    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:55:43.285531    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:43.285540    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:43.302032    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:43.302042    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:43.325275    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:43.325282    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:43.336411    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:43.336421    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:43.347792    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:43.347803    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:43.352777    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:43.352787    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:43.388146    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:43.388158    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:43.402631    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:43.402641    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:43.416468    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:43.416479    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:43.428336    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:43.428348    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:43.447758    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:43.447769    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:45.984379    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:50.498281    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:50.498654    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:50.535494    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:50.535609    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:50.553341    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:50.553434    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:50.567090    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:50.567163    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:50.580385    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:50.580449    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:50.590845    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:50.590916    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:50.601256    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:50.601333    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:50.614147    9084 logs.go:276] 0 containers: []
	W0408 10:55:50.614160    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:50.614222    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:50.624748    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:50.624768    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:50.624775    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:50.636202    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:50.636216    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:50.650000    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:50.650010    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:50.684509    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:50.684520    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:55:50.699356    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:50.699366    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:50.713457    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:50.713473    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:50.727920    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:50.727930    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:50.740457    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:50.740468    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:50.763582    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:50.763593    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:50.775667    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:50.775678    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:50.814174    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:50.814184    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:50.828291    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:50.828301    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:50.846323    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:50.846334    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:50.858031    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:50.858044    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:50.895389    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:50.895403    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:55:50.906735    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:50.906746    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:50.986765    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:50.986864    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:50.998115    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:50.998184    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:51.008269    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:51.008337    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:51.021646    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:51.021730    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:51.032126    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:51.032213    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:51.042642    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:51.042717    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:51.052757    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:51.052828    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:51.062600    8917 logs.go:276] 0 containers: []
	W0408 10:55:51.062612    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:51.062664    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:51.072906    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:55:51.072919    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:51.072925    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:51.109674    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:51.109685    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:51.125973    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:51.125984    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:51.138315    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:51.138326    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:51.149417    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:51.149432    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:55:51.161229    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:51.161239    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:51.172658    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:51.172673    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:51.184176    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:51.184187    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:51.219122    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:51.219131    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:51.236518    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:51.236531    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:55:51.251204    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:51.251215    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:51.268949    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:51.268960    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:51.298457    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:51.298465    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:53.411771    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:53.804560    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:58.414277    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:58.414529    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:58.442497    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:58.442608    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:58.457918    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:58.457999    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:58.469883    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:58.469956    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:58.482772    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:58.482847    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:58.493729    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:58.493798    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:58.505951    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:58.506027    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:58.516139    9084 logs.go:276] 0 containers: []
	W0408 10:55:58.516153    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:58.516213    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:58.526466    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:58.526484    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:58.526490    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:58.530647    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:58.530655    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:58.545013    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:58.545026    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:58.557541    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:58.557554    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:58.580688    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:58.580697    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:58.592366    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:58.592376    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:58.609187    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:58.609198    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:58.645947    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:58.645969    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:58.687980    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:58.687992    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:58.702372    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:58.702384    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:58.739572    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:58.739584    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:58.754046    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:58.754057    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:58.768019    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:58.768029    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:58.779963    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:58.779976    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:55:58.794307    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:58.794317    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:55:58.805515    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:58.805525    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:01.320118    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:58.805338    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:58.805427    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:58.816956    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:55:58.817025    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:58.827619    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:55:58.827687    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:58.838366    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:55:58.838434    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:58.849171    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:55:58.849244    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:58.860809    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:55:58.860883    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:58.871273    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:55:58.871332    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:58.880981    8917 logs.go:276] 0 containers: []
	W0408 10:55:58.880993    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:58.881052    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:58.891564    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:55:58.891580    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:58.891586    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:58.929202    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:55:58.929213    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:55:58.947370    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:55:58.947381    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:55:58.959532    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:55:58.959543    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:55:58.971006    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:58.971016    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:58.995754    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:55:58.995762    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:55:59.007409    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:55:59.007420    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:55:59.024772    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:55:59.024782    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:59.036128    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:59.036139    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:59.070831    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:59.070838    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:59.075603    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:55:59.075609    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:55:59.089739    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:55:59.089754    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:55:59.101300    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:55:59.101311    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:01.618078    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:06.322713    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:06.322901    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:06.339065    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:56:06.339153    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:06.351795    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:56:06.351869    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:06.364096    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:56:06.364176    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:06.374934    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:56:06.375004    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:06.385001    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:56:06.385069    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:06.395668    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:56:06.395741    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:06.405519    9084 logs.go:276] 0 containers: []
	W0408 10:56:06.405535    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:06.405592    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:06.415806    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:56:06.415822    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:06.415827    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:06.454070    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:56:06.454078    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:56:06.465899    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:56:06.465915    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:56:06.480341    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:56:06.480357    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:56:06.491984    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:56:06.491998    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:06.505256    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:56:06.505268    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:56:06.519703    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:56:06.519715    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:56:06.533918    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:56:06.533928    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:56:06.549229    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:56:06.549240    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:56:06.567545    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:56:06.567561    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:56:06.582047    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:06.582056    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:06.604176    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:56:06.604182    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:56:06.618330    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:56:06.618341    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:56:06.630574    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:06.630583    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:06.635112    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:06.635125    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:06.676425    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:56:06.676443    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:56:06.618568    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:06.618690    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:06.630548    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:56:06.630625    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:06.641670    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:56:06.641743    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:06.653255    8917 logs.go:276] 2 containers: [4c05907bbc81 e0304763bc53]
	I0408 10:56:06.653333    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:06.664865    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:56:06.664942    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:06.676620    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:56:06.676691    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:06.688271    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:56:06.688345    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:06.699223    8917 logs.go:276] 0 containers: []
	W0408 10:56:06.699235    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:06.699294    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:06.710734    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:56:06.710749    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:56:06.710754    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:56:06.733852    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:06.733864    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:06.758839    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:56:06.758850    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:06.773148    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:06.773160    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:06.807756    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:56:06.807768    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:56:06.819970    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:56:06.819981    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:06.834578    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:56:06.834588    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:56:06.854040    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:56:06.854051    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:56:06.865845    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:56:06.865854    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:56:06.877606    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:06.877615    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:06.910586    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:06.910592    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:06.915235    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:56:06.915241    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:56:06.929906    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:56:06.929916    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:56:09.218022    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:09.453111    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:14.220269    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:14.220498    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:14.238946    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:56:14.239068    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:14.252455    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:56:14.252532    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:14.263573    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:56:14.263649    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:14.278471    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:56:14.278543    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:14.289071    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:56:14.289144    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:14.299734    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:56:14.299804    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:14.309757    9084 logs.go:276] 0 containers: []
	W0408 10:56:14.309771    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:14.309831    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:14.320148    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:56:14.320166    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:14.320172    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:14.343013    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:56:14.343021    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:56:14.353928    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:56:14.353943    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:56:14.367939    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:56:14.367949    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:56:14.381808    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:56:14.381818    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:56:14.396350    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:56:14.396360    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:56:14.413349    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:56:14.413359    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:14.425934    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:14.425945    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:14.462928    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:56:14.462943    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:56:14.479127    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:56:14.479145    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:56:14.494947    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:14.494961    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:14.502582    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:56:14.502597    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:56:14.542854    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:56:14.542866    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:56:14.558773    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:56:14.558784    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:56:14.572339    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:14.572351    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:14.612729    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:56:14.612746    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:56:17.130895    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:14.455701    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:14.455790    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:14.467943    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:56:14.468023    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:14.479444    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:56:14.479519    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:14.491574    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:56:14.491660    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:14.502920    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:56:14.502994    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:14.514592    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:56:14.514668    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:14.528486    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:56:14.528569    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:14.539576    8917 logs.go:276] 0 containers: []
	W0408 10:56:14.539588    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:14.539652    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:14.550571    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:56:14.550591    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:14.550597    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:14.587979    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:56:14.587993    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:56:14.603560    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:56:14.603570    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:56:14.618028    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:14.618040    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:14.644274    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:56:14.644286    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:14.655835    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:14.655845    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:14.660685    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:56:14.660691    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:56:14.673417    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:56:14.673431    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:56:14.684864    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:56:14.684875    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:56:14.696548    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:56:14.696559    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:56:14.709449    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:56:14.709460    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:14.728229    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:56:14.728239    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:56:14.739726    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:14.739738    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:14.780147    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:56:14.780161    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:56:14.794779    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:56:14.794790    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
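The sweep that just completed locates each component's containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, exactly as logged by ssh_runner.go above; the `N containers: [...]` lines from logs.go:276 are the parsed IDs. A local stand-in for that remote command (a sketch; minikube actually runs it over SSH inside the guest):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs `docker ps -a --filter name=k8s_<component>
// --format {{.ID}}` and returns the matching container IDs,
// mirroring the enumeration logged above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}

The empty result for "kindnet" is expected here: this cluster uses the docker driver's default networking, so no kindnet containers exist and the warning is benign.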
	I0408 10:56:17.314887    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:22.133585    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:22.133791    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:22.152251    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:56:22.152352    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:22.166066    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:56:22.166143    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:22.177445    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:56:22.177508    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:22.187559    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:56:22.187628    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:22.198052    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:56:22.198115    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:22.208998    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:56:22.209075    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:22.218972    9084 logs.go:276] 0 containers: []
	W0408 10:56:22.218982    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:22.219043    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:22.229361    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:56:22.229380    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:56:22.229386    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:22.241230    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:22.241242    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:22.277589    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:56:22.277598    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:56:22.298610    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:56:22.298624    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:56:22.309757    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:22.309769    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:22.334062    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:56:22.334079    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:56:22.354355    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:56:22.354367    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:56:22.395627    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:56:22.395643    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:56:22.408486    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:56:22.408499    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:56:22.424430    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:56:22.424443    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:56:22.439239    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:56:22.439250    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:56:22.453670    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:56:22.453680    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:56:22.465482    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:56:22.465496    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:56:22.485035    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:22.485048    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:22.490087    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:22.490100    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:22.530986    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:56:22.530999    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:56:22.315649    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:22.315728    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:22.328621    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:56:22.328692    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:22.342211    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:56:22.342279    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:22.354556    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:56:22.354629    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:22.366534    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:56:22.366611    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:22.378071    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:56:22.378147    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:22.389245    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:56:22.389302    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:22.400611    8917 logs.go:276] 0 containers: []
	W0408 10:56:22.400623    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:22.400682    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:22.412068    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:56:22.412087    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:22.412093    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:22.449697    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:22.449724    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:22.455639    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:56:22.455652    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:56:22.470978    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:56:22.470995    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:56:22.486853    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:56:22.486871    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:56:22.505173    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:56:22.505185    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:56:22.518389    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:56:22.518402    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:22.534910    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:56:22.534924    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:56:22.548942    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:22.548955    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:22.584219    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:56:22.584229    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:56:22.595679    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:56:22.595688    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:56:22.614264    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:56:22.614277    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:56:22.631830    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:22.631840    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:22.655372    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:56:22.655382    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:22.666782    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:56:22.666791    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:56:25.045952    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:25.182689    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:30.046541    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:30.046821    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:30.070583    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:56:30.070686    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:30.086902    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:56:30.086978    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:30.099337    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:56:30.099408    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:30.110773    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:56:30.110843    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:30.120736    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:56:30.120802    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:30.130949    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:56:30.131022    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:30.141143    9084 logs.go:276] 0 containers: []
	W0408 10:56:30.141161    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:30.141221    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:30.151623    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:56:30.151642    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:56:30.151648    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:56:30.170403    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:56:30.170417    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:56:30.181532    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:56:30.181545    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:56:30.194142    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:56:30.194153    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:56:30.216513    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:30.216525    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:30.221314    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:30.221326    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:30.260278    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:56:30.260295    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:56:30.299665    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:56:30.299682    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:56:30.312421    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:56:30.312433    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:56:30.327241    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:56:30.327255    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:56:30.346556    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:56:30.346570    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:56:30.363809    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:56:30.363819    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:30.376188    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:30.376197    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:30.420888    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:56:30.420911    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:56:30.435675    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:56:30.435689    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:56:30.453942    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:30.453959    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:30.185134    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:30.185215    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:30.200442    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:56:30.200518    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:30.216807    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:56:30.216877    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:30.228121    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:56:30.228221    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:30.240927    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:56:30.241004    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:30.252318    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:56:30.252388    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:30.263730    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:56:30.263807    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:30.274587    8917 logs.go:276] 0 containers: []
	W0408 10:56:30.274599    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:30.274662    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:30.286191    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:56:30.286208    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:56:30.286213    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:30.301284    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:56:30.301292    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:56:30.314006    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:56:30.314016    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:56:30.331070    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:56:30.331081    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:56:30.348628    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:30.348637    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:30.374681    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:56:30.374696    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:30.387104    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:56:30.387117    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:56:30.399991    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:56:30.400002    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:56:30.411730    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:56:30.411741    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:56:30.424271    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:56:30.424283    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:56:30.442301    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:56:30.442318    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:56:30.454870    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:30.454881    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:30.491200    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:56:30.491212    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:56:30.511816    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:30.511827    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:30.547821    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:30.547829    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:32.980984    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:33.054288    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:37.983303    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:37.983439    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:37.998640    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:56:37.998723    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:38.011489    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:56:38.011565    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:38.022287    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:56:38.022357    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:38.032421    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:56:38.032488    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:38.043086    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:56:38.043163    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:38.053552    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:56:38.053628    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:38.064279    9084 logs.go:276] 0 containers: []
	W0408 10:56:38.064292    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:38.064356    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:38.076352    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:56:38.076372    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:38.076382    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:38.118602    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:56:38.118618    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:56:38.133993    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:56:38.134005    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:56:38.148948    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:56:38.148956    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:38.161418    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:38.161430    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:38.201228    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:56:38.201238    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:56:38.217080    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:56:38.217091    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:56:38.232267    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:56:38.232278    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:56:38.271060    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:56:38.271073    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:56:38.289085    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:38.289102    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:38.314299    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:56:38.314308    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:56:38.329354    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:56:38.329367    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:56:38.341640    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:56:38.341653    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:56:38.360150    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:56:38.360162    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:56:38.375378    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:56:38.375394    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:56:38.388176    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:38.388187    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
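The `Gathering logs for <component> [<id>] ...` phase then tails the last 400 lines of each container it just found, via `docker logs --tail 400 <id>` wrapped in /bin/bash, alongside journalctl for kubelet and Docker, dmesg, `kubectl describe nodes`, and a crictl-with-docker-fallback `ps -a` for container status. A sketch of the per-container part only (run locally for illustration, with IDs taken from the log; an assumption, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs mirrors the remote `docker logs --tail 400 <id>`
// seen above; CombinedOutput captures both the container's stdout
// and stderr streams, which docker logs writes separately.
func tailContainerLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	// Container IDs copied from the log lines above.
	for _, id := range []string{"dc533809f89d", "de47585b049b"} {
		logs, err := tailContainerLogs(id)
		if err != nil {
			fmt.Println(id, "error:", err)
			continue
		}
		fmt.Printf("== %s ==\n%s", id, logs)
	}
}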
	I0408 10:56:40.894052    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:38.056560    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:38.056638    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:38.068254    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:56:38.068323    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:38.079779    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:56:38.079874    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:38.090785    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:56:38.090863    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:38.102803    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:56:38.102875    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:38.114185    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:56:38.114256    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:38.125916    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:56:38.125993    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:38.136968    8917 logs.go:276] 0 containers: []
	W0408 10:56:38.136981    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:38.137040    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:38.148285    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:56:38.148304    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:38.148309    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:38.184468    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:56:38.184479    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:56:38.200426    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:56:38.200437    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:56:38.213055    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:56:38.213067    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:56:38.232435    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:56:38.232445    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:56:38.245123    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:56:38.245134    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:38.258239    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:38.258252    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:38.299290    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:56:38.299305    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:56:38.314014    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:56:38.314025    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:38.338876    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:56:38.338889    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:56:38.351941    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:56:38.351954    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:56:38.367186    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:56:38.367202    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:56:38.379297    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:38.379313    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:38.403911    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:38.403920    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:38.408637    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:56:38.408644    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:56:40.926424    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:45.896685    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:45.897004    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:45.933332    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:56:45.933463    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:45.951676    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:56:45.951768    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:45.966192    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:56:45.966245    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:45.980341    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:56:45.980407    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:46.011129    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:56:46.011229    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:46.030985    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:56:46.031024    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:46.045829    9084 logs.go:276] 0 containers: []
	W0408 10:56:46.045857    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:46.045923    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:46.057984    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:56:46.058004    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:56:46.058009    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:56:46.101244    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:56:46.101260    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:56:46.116134    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:56:46.116149    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:46.129440    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:46.129455    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:46.166849    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:56:46.166859    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:56:46.181675    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:56:46.181689    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:56:46.200402    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:56:46.200416    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:56:46.220122    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:46.220133    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:46.244843    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:46.244869    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:46.283943    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:46.283961    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:46.288488    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:56:46.288497    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:56:46.303859    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:56:46.303869    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:56:46.323292    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:56:46.323303    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:56:46.336944    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:56:46.336957    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:56:46.350906    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:56:46.350916    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:56:46.364590    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:56:46.364603    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:56:45.927337    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:45.927563    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:45.948599    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:56:45.948695    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:45.964430    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:56:45.964515    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:45.977402    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:56:45.977472    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:45.989166    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:56:45.989241    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:46.001634    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:56:46.001711    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:46.013388    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:56:46.013460    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:46.030721    8917 logs.go:276] 0 containers: []
	W0408 10:56:46.030735    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:46.030805    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:46.042739    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:56:46.042758    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:56:46.042763    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:56:46.055566    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:56:46.055581    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:46.072306    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:46.072321    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:46.077414    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:46.077425    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:46.113526    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:56:46.113540    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:56:46.134510    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:56:46.134523    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:56:46.147923    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:46.147934    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:46.184495    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:56:46.184512    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:56:46.197171    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:56:46.197183    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:56:46.210065    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:46.210078    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:46.235241    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:56:46.235254    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:56:46.253646    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:56:46.253658    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:56:46.272145    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:56:46.272156    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:56:46.286674    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:56:46.286683    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:56:46.301839    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:56:46.301852    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:48.878136    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:48.816447    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:53.880470    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:53.880692    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:53.895993    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:56:53.896070    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:53.909353    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:56:53.909424    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:53.921651    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:56:53.921725    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:53.933346    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:56:53.933415    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:53.944782    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:56:53.944860    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:53.962984    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:56:53.963057    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:53.973921    9084 logs.go:276] 0 containers: []
	W0408 10:56:53.973932    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:53.973989    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:53.987288    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:56:53.987308    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:56:53.987314    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:56:54.003279    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:56:54.003287    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:56:54.015641    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:56:54.015653    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:56:54.034525    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:54.034535    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:54.074567    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:56:54.074578    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:56:54.090090    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:56:54.090109    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:56:54.102930    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:54.102939    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:54.127186    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:56:54.127200    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:56:54.139523    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:56:54.139538    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:54.152265    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:54.152278    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:54.156990    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:54.156999    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:54.196206    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:56:54.196219    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:56:54.211921    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:56:54.211932    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:56:54.233614    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:56:54.233625    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:56:54.245191    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:56:54.245203    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:56:54.284704    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:56:54.284714    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:56:56.801045    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:53.818725    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:53.819317    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:53.868286    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:56:53.868419    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:53.888313    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:56:53.888399    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:53.902975    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:56:53.903061    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:53.915375    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:56:53.915454    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:53.931094    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:56:53.931171    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:53.944939    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:56:53.944968    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:53.957364    8917 logs.go:276] 0 containers: []
	W0408 10:56:53.957376    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:53.957441    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:53.971006    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:56:53.971023    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:53.971029    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:53.976237    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:56:53.976251    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:56:53.989921    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:56:53.989933    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:56:54.002980    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:56:54.002996    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:56:54.021872    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:56:54.021890    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:56:54.034185    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:56:54.034196    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:54.048579    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:54.048593    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:54.085585    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:56:54.085601    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:56:54.101624    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:56:54.101639    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:56:54.126819    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:56:54.126830    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:56:54.141369    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:54.141379    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:54.178957    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:56:54.178970    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:56:54.198851    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:56:54.198862    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:56:54.220345    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:54.220356    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:54.246625    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:56:54.246643    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:56:56.764764    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:01.803181    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:01.803261    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:01.815142    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:57:01.815222    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:01.826493    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:57:01.826564    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:01.838224    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:57:01.838300    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:01.849650    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:57:01.849728    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:01.861640    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:57:01.861703    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:01.872488    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:57:01.872565    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:01.883623    9084 logs.go:276] 0 containers: []
	W0408 10:57:01.883635    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:01.883709    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:01.894388    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:57:01.894407    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:57:01.894415    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:57:01.912846    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:01.912857    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:01.937524    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:01.937536    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:01.941989    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:01.941996    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:01.978238    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:57:01.978250    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:57:02.018336    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:02.018353    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:02.057495    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:57:02.057506    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:57:02.072573    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:57:02.072582    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:57:02.085119    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:57:02.085132    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:57:02.100237    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:57:02.100251    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:02.112893    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:57:02.112907    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:57:02.129885    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:57:02.129895    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:57:02.144908    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:57:02.144919    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:57:02.166903    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:57:02.166914    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:57:02.182499    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:57:02.182513    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:57:02.194013    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:57:02.194024    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
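
The block above repeats one gather cycle: for each control-plane component, minikube lists matching container IDs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tails the last 400 lines of each container it found. A minimal Go sketch of that cycle, assuming Docker is reachable on PATH (a simplification of what logs.go does over SSH; names here are illustrative, not minikube's own):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all containers (running or exited) whose name matches
    // the kubelet's k8s_<component> naming convention.
    func containerIDs(component string) []string {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids := containerIDs(c)
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    		for _, id := range ids {
    			// mirror `docker logs --tail 400 <id>` from the log above
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("== %s [%s] ==\n%s\n", c, id, logs)
    		}
    	}
    }

Two IDs for one component (as for kube-apiserver above) simply mean an exited container from a previous boot sits alongside the current one; both logs are gathered.
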
	I0408 10:57:01.767224    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:01.767459    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:01.785067    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:57:01.785156    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:01.798588    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:57:01.798661    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:01.810445    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:57:01.810520    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:01.821961    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:57:01.822031    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:01.833142    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:57:01.833219    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:01.846983    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:57:01.847061    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:01.860007    8917 logs.go:276] 0 containers: []
	W0408 10:57:01.860021    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:01.860088    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:01.879169    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:57:01.879187    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:57:01.879192    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:57:01.895863    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:57:01.895871    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:57:01.910603    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:57:01.910615    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:57:01.923593    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:01.923604    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:01.950718    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:57:01.950740    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:01.964054    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:01.964067    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:01.999731    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:01.999742    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:02.005000    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:02.005011    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:02.041415    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:57:02.041428    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:57:02.056869    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:57:02.056882    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:57:02.070878    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:57:02.070889    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:57:02.090208    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:57:02.090223    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:57:02.103057    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:57:02.103067    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:57:02.116370    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:57:02.116381    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:57:02.128933    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:57:02.128945    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:57:04.706982    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:04.651124    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:09.709127    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:09.709200    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:09.721366    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:57:09.721441    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:09.732827    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:57:09.732888    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:09.743736    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:57:09.743800    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:09.755336    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:57:09.755412    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:09.771318    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:57:09.771390    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:09.782823    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:57:09.782899    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:09.793637    9084 logs.go:276] 0 containers: []
	W0408 10:57:09.793651    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:09.793714    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:09.805234    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:57:09.805249    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:57:09.805254    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:57:09.818574    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:09.818596    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:09.857319    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:57:09.857336    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:57:09.896107    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:57:09.896118    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:57:09.912136    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:57:09.912144    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:57:09.927563    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:57:09.927575    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:57:09.940076    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:57:09.940089    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:09.952570    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:09.952583    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:09.999392    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:57:09.999402    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:57:10.014816    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:57:10.014834    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:57:10.029961    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:57:10.029973    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:57:10.042560    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:57:10.042573    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:57:10.064282    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:57:10.064296    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:57:10.075829    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:10.075843    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:10.097502    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:10.097511    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:10.101669    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:57:10.101675    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:57:09.653332    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:09.653466    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:09.671088    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:57:09.671169    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:09.682982    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:57:09.683050    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:09.693406    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:57:09.693484    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:09.703707    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:57:09.703774    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:09.715245    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:57:09.715315    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:09.730382    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:57:09.730456    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:09.741189    8917 logs.go:276] 0 containers: []
	W0408 10:57:09.741201    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:09.741261    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:09.752416    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:57:09.752436    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:57:09.752441    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:57:09.765546    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:57:09.765557    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:57:09.782224    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:09.782242    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:09.818995    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:09.819010    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:09.859463    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:57:09.859474    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:57:09.872252    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:57:09.872263    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:57:09.890712    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:09.890726    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:09.896347    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:57:09.896354    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:57:09.910966    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:57:09.910977    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:57:09.928641    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:57:09.928649    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:57:09.953731    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:57:09.953740    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:57:09.966594    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:57:09.966605    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:57:09.979372    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:09.979388    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:10.005271    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:57:10.005286    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:10.020362    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:57:10.020374    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:57:12.538055    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:12.620802    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:17.540510    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:17.540735    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:17.559953    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:57:17.560069    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:17.574281    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:57:17.574358    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:17.585883    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:57:17.585954    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:17.596459    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:57:17.596533    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:17.611203    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:57:17.611268    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:17.621981    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:57:17.622055    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:17.633058    8917 logs.go:276] 0 containers: []
	W0408 10:57:17.633070    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:17.633132    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:17.645147    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:57:17.645171    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:17.645178    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:17.650207    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:57:17.650219    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:57:17.665823    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:17.665839    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:17.703096    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:57:17.703113    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:57:17.722324    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:57:17.722334    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:57:17.735383    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:57:17.735396    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:57:17.751555    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:57:17.751573    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:57:17.764121    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:57:17.764133    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:57:17.776539    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:57:17.776550    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:17.789070    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:17.789083    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:17.826799    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:57:17.826812    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:57:17.839058    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:57:17.839073    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:57:17.854846    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:57:17.854858    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:57:17.867195    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:57:17.867206    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:57:17.885453    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:17.885463    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:17.622339    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:17.622383    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:17.635147    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:57:17.635196    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:17.646354    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:57:17.646420    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:17.659347    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:57:17.659421    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:17.670493    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:57:17.670569    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:17.681578    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:57:17.681649    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:17.698580    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:57:17.698655    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:17.709351    9084 logs.go:276] 0 containers: []
	W0408 10:57:17.709362    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:17.709420    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:17.720736    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:57:17.720754    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:17.720760    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:17.725365    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:57:17.725375    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:57:17.740762    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:57:17.740773    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:57:17.753424    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:17.753435    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:17.776656    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:17.776665    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:17.813764    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:57:17.813777    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:17.828699    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:57:17.828709    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:57:17.865258    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:57:17.865276    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:57:17.912061    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:57:17.912075    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:57:17.924472    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:57:17.924484    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:57:17.938016    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:17.938026    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:17.974861    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:57:17.974869    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:57:17.988309    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:57:17.988319    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:57:18.002652    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:57:18.002664    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:57:18.014171    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:57:18.014185    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:57:18.025845    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:57:18.025855    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:57:20.545926    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:20.414262    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:25.548333    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:25.548363    9084 kubeadm.go:591] duration metric: took 4m3.905114541s to restartPrimaryControlPlane
	W0408 10:57:25.548404    9084 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
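
The 4m3.9s restartPrimaryControlPlane loop that ends here alternates two steps: probe https://10.0.2.15:8443/healthz, and on timeout fall back to the gather cycle shown above. A sketch of such a probe, assuming a 5s budget per attempt and with TLS verification disabled purely for illustration (the real client config, dumped later in this log, pins the profile's client cert and CA):

    package main

    import (
    	"context"
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
    	}}
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	req, _ := http.NewRequestWithContext(ctx, http.MethodGet,
    		"https://10.0.2.15:8443/healthz", nil)
    	resp, err := client.Do(req)
    	if err != nil {
    		// on a dead apiserver this is the "context deadline exceeded" seen above
    		fmt.Println("stopped:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz:", resp.Status)
    }
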
	I0408 10:57:25.548417    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0408 10:57:26.580495    9084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.032062125s)
	I0408 10:57:26.580575    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 10:57:26.585606    9084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 10:57:26.588389    9084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 10:57:26.591016    9084 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 10:57:26.591022    9084 kubeadm.go:156] found existing configuration files:
	
	I0408 10:57:26.591047    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf
	I0408 10:57:26.593345    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 10:57:26.593368    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 10:57:26.596312    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf
	I0408 10:57:26.599171    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 10:57:26.599191    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 10:57:26.601645    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf
	I0408 10:57:26.604662    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 10:57:26.604684    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 10:57:26.607817    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf
	I0408 10:57:26.610495    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 10:57:26.610517    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
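
The grep/rm pairs above implement a simple rule: keep a kubeconfig under /etc/kubernetes only if it already points at the expected control-plane endpoint, otherwise delete it so the upcoming `kubeadm init` regenerates it. A compact Go sketch of that rule (file names and endpoint taken from the log; error handling reduced to the missing-file case seen above):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:51476" // from the log above
    	for _, name := range []string{"admin.conf", "kubelet.conf",
    		"controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + name
    		data, err := os.ReadFile(path)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// equivalent to `sudo rm -f <path>`; removing a missing file is harmless
    			os.Remove(path)
    		}
    	}
    }
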
	I0408 10:57:26.613212    9084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 10:57:26.629729    9084 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0408 10:57:26.629765    9084 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 10:57:26.679877    9084 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 10:57:26.679933    9084 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 10:57:26.679992    9084 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0408 10:57:26.728518    9084 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 10:57:26.732731    9084 out.go:204]   - Generating certificates and keys ...
	I0408 10:57:26.732864    9084 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 10:57:26.732965    9084 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 10:57:26.733055    9084 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 10:57:26.733090    9084 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 10:57:26.733126    9084 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 10:57:26.733151    9084 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 10:57:26.733184    9084 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 10:57:26.733285    9084 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 10:57:26.733383    9084 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 10:57:26.733468    9084 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 10:57:26.733489    9084 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 10:57:26.733519    9084 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 10:57:26.805521    9084 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 10:57:26.948630    9084 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 10:57:27.196024    9084 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 10:57:27.263798    9084 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 10:57:27.292234    9084 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 10:57:27.292678    9084 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 10:57:27.292699    9084 kubeadm.go:309] [kubelet-start] Starting the kubelet
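
The kubelet-start phase just above writes two files before restarting the kubelet. The flags file is a one-line env file; an illustrative example for a cri-dockerd setup like this one (contents assumed, not taken from this run):

    KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --pod-infra-container-image=k8s.gcr.io/pause:3.7"
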
	I0408 10:57:27.386598    9084 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 10:57:27.390942    9084 out.go:204]   - Booting up control plane ...
	I0408 10:57:27.390988    9084 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 10:57:27.391024    9084 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 10:57:27.391057    9084 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 10:57:27.391119    9084 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 10:57:27.391208    9084 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 10:57:25.416075    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:25.416422    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:25.452383    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:57:25.452523    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:25.470972    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:57:25.471062    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:25.484859    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:57:25.484941    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:25.496180    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:57:25.496262    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:25.509984    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:57:25.510055    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:25.520224    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:57:25.520294    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:25.531598    8917 logs.go:276] 0 containers: []
	W0408 10:57:25.531613    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:25.531667    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:25.543560    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:57:25.543578    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:25.543584    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:25.581058    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:25.581074    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:25.618793    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:57:25.618804    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:57:25.631651    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:57:25.631665    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:57:25.649839    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:57:25.649853    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:57:25.662058    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:25.662072    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:25.685394    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:25.685408    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:25.689836    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:57:25.689842    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:57:25.704436    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:57:25.704450    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:57:25.717033    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:57:25.717054    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:57:25.733195    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:57:25.733214    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:57:25.746146    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:57:25.746158    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:25.759202    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:57:25.759218    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:57:25.771991    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:57:25.772005    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:57:25.786371    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:57:25.786384    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:57:31.891771    9084 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501223 seconds
	I0408 10:57:31.891834    9084 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 10:57:31.895564    9084 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 10:57:32.412543    9084 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 10:57:32.412834    9084 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-476000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 10:57:32.916321    9084 kubeadm.go:309] [bootstrap-token] Using token: 9bum99.wbtrb7jvnhsflftl
	I0408 10:57:28.303371    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:32.922648    9084 out.go:204]   - Configuring RBAC rules ...
	I0408 10:57:32.922701    9084 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 10:57:32.922761    9084 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 10:57:32.926348    9084 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 10:57:32.927084    9084 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 10:57:32.928025    9084 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 10:57:32.928940    9084 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 10:57:32.932376    9084 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 10:57:33.108672    9084 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 10:57:33.321122    9084 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 10:57:33.321547    9084 kubeadm.go:309] 
	I0408 10:57:33.321639    9084 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 10:57:33.321653    9084 kubeadm.go:309] 
	I0408 10:57:33.321764    9084 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 10:57:33.321774    9084 kubeadm.go:309] 
	I0408 10:57:33.321827    9084 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 10:57:33.321867    9084 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 10:57:33.321895    9084 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 10:57:33.321901    9084 kubeadm.go:309] 
	I0408 10:57:33.321934    9084 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 10:57:33.321937    9084 kubeadm.go:309] 
	I0408 10:57:33.321968    9084 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 10:57:33.321970    9084 kubeadm.go:309] 
	I0408 10:57:33.322067    9084 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 10:57:33.322169    9084 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 10:57:33.322326    9084 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 10:57:33.322333    9084 kubeadm.go:309] 
	I0408 10:57:33.322381    9084 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 10:57:33.322424    9084 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 10:57:33.322428    9084 kubeadm.go:309] 
	I0408 10:57:33.322466    9084 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9bum99.wbtrb7jvnhsflftl \
	I0408 10:57:33.322546    9084 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d8c71b19ee4cc7e59001527410e809db8edc3975b4a5e7aa364a2679c02ff296 \
	I0408 10:57:33.322609    9084 kubeadm.go:309] 	--control-plane 
	I0408 10:57:33.322652    9084 kubeadm.go:309] 
	I0408 10:57:33.322700    9084 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 10:57:33.322703    9084 kubeadm.go:309] 
	I0408 10:57:33.322775    9084 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9bum99.wbtrb7jvnhsflftl \
	I0408 10:57:33.322854    9084 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d8c71b19ee4cc7e59001527410e809db8edc3975b4a5e7aa364a2679c02ff296 
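
The --discovery-token-ca-cert-hash printed above is not a hash of the whole certificate: kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the cluster CA (the RFC 7469 public-key-pin format). A Go sketch that reproduces it from ca.crt (path assumed to be kubeadm's default):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// DER-encoded SubjectPublicKeyInfo, then sha256 — matches the join flag
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
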
	I0408 10:57:33.322950    9084 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 10:57:33.322963    9084 cni.go:84] Creating CNI manager for ""
	I0408 10:57:33.322978    9084 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:57:33.326734    9084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 10:57:33.334690    9084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 10:57:33.337900    9084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
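
The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI chain the "Configuring bridge CNI" step refers to. Its exact contents are not shown in this log; an illustrative bridge-plus-portmap conflist of the same general shape (subnet and names assumed):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
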
	I0408 10:57:33.343261    9084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 10:57:33.343347    9084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 10:57:33.343365    9084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-476000 minikube.k8s.io/updated_at=2024_04_08T10_57_33_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f9de8f0b190a4305b11b3a925ec3e499cf3fc021 minikube.k8s.io/name=stopped-upgrade-476000 minikube.k8s.io/primary=true
	I0408 10:57:33.347015    9084 ops.go:34] apiserver oom_adj: -16
	I0408 10:57:33.398173    9084 kubeadm.go:1107] duration metric: took 54.904541ms to wait for elevateKubeSystemPrivileges
	W0408 10:57:33.398203    9084 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 10:57:33.398207    9084 kubeadm.go:393] duration metric: took 4m11.768473416s to StartCluster
	I0408 10:57:33.398217    9084 settings.go:142] acquiring lock: {Name:mk6ed0f877152c89dfeb4cfbed60423b324ecbe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:57:33.398307    9084 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:57:33.398730    9084 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/kubeconfig: {Name:mk0efe9672745867be1d2d584884b1976098d9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:57:33.398988    9084 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:57:33.405678    9084 out.go:177] * Verifying Kubernetes components...
	I0408 10:57:33.399006    9084 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 10:57:33.399281    9084 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 10:57:33.413777    9084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:57:33.413790    9084 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-476000"
	I0408 10:57:33.413792    9084 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-476000"
	I0408 10:57:33.413832    9084 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-476000"
	W0408 10:57:33.413838    9084 addons.go:243] addon storage-provisioner should already be in state true
	I0408 10:57:33.413858    9084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-476000"
	I0408 10:57:33.413869    9084 host.go:66] Checking if "stopped-upgrade-476000" exists ...
	I0408 10:57:33.414337    9084 retry.go:31] will retry after 1.139752945s: connect: dial unix /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/monitor: connect: connection refused
	I0408 10:57:33.415557    9084 kapi.go:59] client config for stopped-upgrade-476000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/client.key", CAFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1042e3a70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 10:57:33.415684    9084 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-476000"
	W0408 10:57:33.415690    9084 addons.go:243] addon default-storageclass should already be in state true
	I0408 10:57:33.415701    9084 host.go:66] Checking if "stopped-upgrade-476000" exists ...
	I0408 10:57:33.416661    9084 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 10:57:33.416667    9084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 10:57:33.416672    9084 sshutil.go:53] new ssh client: &{IP:localhost Port:51442 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/id_rsa Username:docker}
	I0408 10:57:33.505911    9084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 10:57:33.512168    9084 api_server.go:52] waiting for apiserver process to appear ...
	I0408 10:57:33.512222    9084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 10:57:33.516643    9084 api_server.go:72] duration metric: took 117.63975ms to wait for apiserver process to appear ...
	I0408 10:57:33.516653    9084 api_server.go:88] waiting for apiserver healthz status ...
	I0408 10:57:33.516662    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:33.554808    9084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
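
The 271-byte storageclass.yaml applied above is what the default-storageclass addon installs. The file itself is not reproduced in this log; a sketch of what such a default-class manifest looks like for minikube's hostpath provisioner (details assumed):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: k8s.io/minikube-hostpath
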
	I0408 10:57:34.560968    9084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:57:34.564240    9084 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 10:57:34.564251    9084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 10:57:34.564263    9084 sshutil.go:53] new ssh client: &{IP:localhost Port:51442 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/id_rsa Username:docker}
	I0408 10:57:34.599495    9084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 10:57:33.305597    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:33.305702    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:33.316300    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:57:33.316371    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:33.332484    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:57:33.332555    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:33.361500    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:57:33.361584    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:33.381783    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:57:33.381861    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:33.393510    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:57:33.393592    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:33.405021    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:57:33.405102    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:33.415729    8917 logs.go:276] 0 containers: []
	W0408 10:57:33.415738    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:33.415781    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:33.427433    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:57:33.427451    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:57:33.427463    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:57:33.440429    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:57:33.440442    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:57:33.456115    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:57:33.456128    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:57:33.468669    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:57:33.468684    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:57:33.483801    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:57:33.483811    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:57:33.496548    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:57:33.496559    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:57:33.509797    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:57:33.509813    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:33.522589    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:57:33.522604    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:57:33.537389    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:33.537399    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:33.541877    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:33.541884    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:33.579860    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:57:33.579871    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:57:33.595818    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:57:33.595831    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:57:33.609104    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:57:33.609116    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:57:33.627993    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:33.628010    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:33.666859    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:33.666874    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:36.194075    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:38.518836    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:38.518865    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:41.196596    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:41.196952    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:41.238053    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:57:41.238166    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:41.256605    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:57:41.256702    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:41.271458    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:57:41.271533    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:41.282834    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:57:41.282913    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:41.293333    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:57:41.293394    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:41.303740    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:57:41.303808    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:41.314311    8917 logs.go:276] 0 containers: []
	W0408 10:57:41.314321    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:41.314373    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:41.325006    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:57:41.325022    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:41.325027    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:41.357998    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:41.358006    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:41.362262    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:57:41.362270    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:57:41.377954    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:57:41.377963    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:57:41.397169    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:57:41.397180    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:57:41.411981    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:57:41.411992    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:57:41.426996    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:57:41.427008    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:57:41.438863    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:57:41.438877    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:41.451999    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:41.452012    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:41.487408    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:57:41.487421    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:57:41.500074    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:57:41.500086    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:57:41.524016    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:41.524030    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:41.547708    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:57:41.547718    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:57:41.562394    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:57:41.562406    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:57:41.573832    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:57:41.573844    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:57:43.519371    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:43.519403    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:44.087626    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:48.519837    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:48.519885    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:49.089891    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:49.090100    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:49.112252    8917 logs.go:276] 1 containers: [d1ba90cef09b]
	I0408 10:57:49.112353    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:49.127825    8917 logs.go:276] 1 containers: [b7f1267d9e43]
	I0408 10:57:49.127911    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:49.141368    8917 logs.go:276] 4 containers: [b9fb070ee194 538d20d19fe2 4c05907bbc81 e0304763bc53]
	I0408 10:57:49.141448    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:49.152200    8917 logs.go:276] 1 containers: [34b4726b2637]
	I0408 10:57:49.152270    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:49.162545    8917 logs.go:276] 1 containers: [01d6b1fb69ca]
	I0408 10:57:49.162620    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:49.173335    8917 logs.go:276] 1 containers: [20e2e023314e]
	I0408 10:57:49.173407    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:49.183731    8917 logs.go:276] 0 containers: []
	W0408 10:57:49.183745    8917 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:49.183799    8917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:49.194966    8917 logs.go:276] 1 containers: [d175f0beb5d4]
	I0408 10:57:49.194986    8917 logs.go:123] Gathering logs for kube-controller-manager [20e2e023314e] ...
	I0408 10:57:49.194991    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20e2e023314e"
	I0408 10:57:49.213421    8917 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:49.213433    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:49.248172    8917 logs.go:123] Gathering logs for coredns [b9fb070ee194] ...
	I0408 10:57:49.248184    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9fb070ee194"
	I0408 10:57:49.259940    8917 logs.go:123] Gathering logs for coredns [538d20d19fe2] ...
	I0408 10:57:49.259951    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 538d20d19fe2"
	I0408 10:57:49.271211    8917 logs.go:123] Gathering logs for coredns [4c05907bbc81] ...
	I0408 10:57:49.271223    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c05907bbc81"
	I0408 10:57:49.287715    8917 logs.go:123] Gathering logs for coredns [e0304763bc53] ...
	I0408 10:57:49.287729    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0304763bc53"
	I0408 10:57:49.300715    8917 logs.go:123] Gathering logs for kube-scheduler [34b4726b2637] ...
	I0408 10:57:49.300729    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34b4726b2637"
	I0408 10:57:49.321771    8917 logs.go:123] Gathering logs for kube-proxy [01d6b1fb69ca] ...
	I0408 10:57:49.321783    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01d6b1fb69ca"
	I0408 10:57:49.333795    8917 logs.go:123] Gathering logs for storage-provisioner [d175f0beb5d4] ...
	I0408 10:57:49.333805    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d175f0beb5d4"
	I0408 10:57:49.346083    8917 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:49.346092    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:49.368800    8917 logs.go:123] Gathering logs for container status ...
	I0408 10:57:49.368808    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:49.380040    8917 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:49.380053    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:49.384629    8917 logs.go:123] Gathering logs for etcd [b7f1267d9e43] ...
	I0408 10:57:49.384638    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f1267d9e43"
	I0408 10:57:49.398866    8917 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:49.398880    8917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:49.439322    8917 logs.go:123] Gathering logs for kube-apiserver [d1ba90cef09b] ...
	I0408 10:57:49.439332    8917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1ba90cef09b"
	I0408 10:57:51.956375    8917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:56.958755    8917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:56.963367    8917 out.go:177] 
	W0408 10:57:56.966340    8917 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0408 10:57:56.966353    8917 out.go:239] * 
	W0408 10:57:56.967184    8917 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:57:56.977278    8917 out.go:177] 
	I0408 10:57:53.520417    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:53.520441    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:58.521086    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:58.521112    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:58:03.521905    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:58:03.521932    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0408 10:58:03.895863    9084 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0408 10:58:03.904271    9084 out.go:177] * Enabled addons: storage-provisioner
	I0408 10:58:03.913210    9084 addons.go:505] duration metric: took 30.514006417s for enable addons: enabled=[storage-provisioner]
	I0408 10:58:08.523025    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:58:08.523120    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-04-08 17:49:06 UTC, ends at Mon 2024-04-08 17:58:13 UTC. --
	Apr 08 17:57:58 running-upgrade-603000 dockerd[3338]: time="2024-04-08T17:57:58.109717023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 17:57:58 running-upgrade-603000 dockerd[3338]: time="2024-04-08T17:57:58.109765226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 17:57:58 running-upgrade-603000 dockerd[3338]: time="2024-04-08T17:57:58.109771226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 17:57:58 running-upgrade-603000 dockerd[3338]: time="2024-04-08T17:57:58.109934461Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9c432efe7b4d00d188b81910c2bb7cefb90b36ae0a3842f0d9004387a4cb26a3 pid=19050 runtime=io.containerd.runc.v2
	Apr 08 17:57:58 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:57:58Z" level=error msg="ContainerStats resp: {0x4000778b80 linux}"
	Apr 08 17:57:58 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:57:58Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Apr 08 17:57:59 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:57:59Z" level=error msg="ContainerStats resp: {0x4000645c40 linux}"
	Apr 08 17:57:59 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:57:59Z" level=error msg="ContainerStats resp: {0x4000645f80 linux}"
	Apr 08 17:57:59 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:57:59Z" level=error msg="ContainerStats resp: {0x400098e0c0 linux}"
	Apr 08 17:57:59 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:57:59Z" level=error msg="ContainerStats resp: {0x400098e500 linux}"
	Apr 08 17:57:59 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:57:59Z" level=error msg="ContainerStats resp: {0x4000a40040 linux}"
	Apr 08 17:57:59 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:57:59Z" level=error msg="ContainerStats resp: {0x4000a40180 linux}"
	Apr 08 17:57:59 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:57:59Z" level=error msg="ContainerStats resp: {0x400098f2c0 linux}"
	Apr 08 17:58:03 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:58:03Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Apr 08 17:58:08 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:58:08Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Apr 08 17:58:09 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:58:09Z" level=error msg="ContainerStats resp: {0x4000a0b600 linux}"
	Apr 08 17:58:09 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:58:09Z" level=error msg="ContainerStats resp: {0x4000774dc0 linux}"
	Apr 08 17:58:10 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:58:10Z" level=error msg="ContainerStats resp: {0x40008b9380 linux}"
	Apr 08 17:58:11 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:58:11Z" level=error msg="ContainerStats resp: {0x4000778340 linux}"
	Apr 08 17:58:11 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:58:11Z" level=error msg="ContainerStats resp: {0x4000775840 linux}"
	Apr 08 17:58:11 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:58:11Z" level=error msg="ContainerStats resp: {0x4000775d80 linux}"
	Apr 08 17:58:11 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:58:11Z" level=error msg="ContainerStats resp: {0x40006440c0 linux}"
	Apr 08 17:58:11 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:58:11Z" level=error msg="ContainerStats resp: {0x4000644a00 linux}"
	Apr 08 17:58:11 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:58:11Z" level=error msg="ContainerStats resp: {0x4000644f80 linux}"
	Apr 08 17:58:11 running-upgrade-603000 cri-dockerd[3181]: time="2024-04-08T17:58:11Z" level=error msg="ContainerStats resp: {0x4000779ec0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9c432efe7b4d0       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   80a15ec7ef8b7
	5e68392e41bbb       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   06c930554e44d
	b9fb070ee194f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   80a15ec7ef8b7
	538d20d19fe2b       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   06c930554e44d
	01d6b1fb69ca2       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   d6bd9bef2881b
	d175f0beb5d41       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   484093c1ccaa9
	20e2e023314ea       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   bea1ff1d6aa24
	b7f1267d9e432       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   ea87ec66fac30
	d1ba90cef09b0       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   849c86de3a411
	34b4726b26376       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   31596dac83f96
	
	
	==> coredns [538d20d19fe2] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 897311469775907960.2064160821633766976. HINFO: read udp 10.244.0.3:46472->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 897311469775907960.2064160821633766976. HINFO: read udp 10.244.0.3:44939->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 897311469775907960.2064160821633766976. HINFO: read udp 10.244.0.3:50836->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 897311469775907960.2064160821633766976. HINFO: read udp 10.244.0.3:55670->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 897311469775907960.2064160821633766976. HINFO: read udp 10.244.0.3:54337->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 897311469775907960.2064160821633766976. HINFO: read udp 10.244.0.3:43946->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 897311469775907960.2064160821633766976. HINFO: read udp 10.244.0.3:51922->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 897311469775907960.2064160821633766976. HINFO: read udp 10.244.0.3:44637->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 897311469775907960.2064160821633766976. HINFO: read udp 10.244.0.3:35368->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 897311469775907960.2064160821633766976. HINFO: read udp 10.244.0.3:50744->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5e68392e41bb] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4416459690760213907.245859557091089457. HINFO: read udp 10.244.0.3:58814->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4416459690760213907.245859557091089457. HINFO: read udp 10.244.0.3:56226->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4416459690760213907.245859557091089457. HINFO: read udp 10.244.0.3:37572->10.0.2.3:53: i/o timeout
	
	
	==> coredns [9c432efe7b4d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6969035761063692912.730944192104450668. HINFO: read udp 10.244.0.2:35986->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6969035761063692912.730944192104450668. HINFO: read udp 10.244.0.2:38629->10.0.2.3:53: i/o timeout
	
	
	==> coredns [b9fb070ee194] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4359835835151709174.6517315803458523835. HINFO: read udp 10.244.0.2:37532->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4359835835151709174.6517315803458523835. HINFO: read udp 10.244.0.2:52589->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4359835835151709174.6517315803458523835. HINFO: read udp 10.244.0.2:40753->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4359835835151709174.6517315803458523835. HINFO: read udp 10.244.0.2:45514->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-603000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-603000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9de8f0b190a4305b11b3a925ec3e499cf3fc021
	                    minikube.k8s.io/name=running-upgrade-603000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T10_53_56_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 17:53:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-603000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 17:58:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 17:53:55 +0000   Mon, 08 Apr 2024 17:53:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 17:53:55 +0000   Mon, 08 Apr 2024 17:53:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 17:53:55 +0000   Mon, 08 Apr 2024 17:53:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 17:53:55 +0000   Mon, 08 Apr 2024 17:53:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-603000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 f17be952e77d4703b6769661e4c5ce31
	  System UUID:                f17be952e77d4703b6769661e4c5ce31
	  Boot ID:                    3fd4bdd6-b207-4a84-98e1-b9ace94615fa
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-5ctf9                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-bbfml                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-603000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-603000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-603000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-8q6mf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-603000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m18s  kubelet          Node running-upgrade-603000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m18s  kubelet          Node running-upgrade-603000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s  kubelet          Node running-upgrade-603000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s  kubelet          Node running-upgrade-603000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m18s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-603000 event: Registered Node running-upgrade-603000 in Controller
	
	
	==> dmesg <==
	[  +1.988758] systemd-fstab-generator[874]: Ignoring "noauto" for root device
	[  +0.083099] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.081697] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +1.136889] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.091642] systemd-fstab-generator[1046]: Ignoring "noauto" for root device
	[  +0.081354] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +2.906090] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +8.639147] systemd-fstab-generator[1923]: Ignoring "noauto" for root device
	[  +2.580447] systemd-fstab-generator[2204]: Ignoring "noauto" for root device
	[  +0.153204] systemd-fstab-generator[2238]: Ignoring "noauto" for root device
	[  +0.092786] systemd-fstab-generator[2249]: Ignoring "noauto" for root device
	[  +0.095410] systemd-fstab-generator[2262]: Ignoring "noauto" for root device
	[  +3.312031] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.200329] systemd-fstab-generator[3136]: Ignoring "noauto" for root device
	[  +0.080795] systemd-fstab-generator[3149]: Ignoring "noauto" for root device
	[  +0.081455] systemd-fstab-generator[3160]: Ignoring "noauto" for root device
	[  +0.098911] systemd-fstab-generator[3174]: Ignoring "noauto" for root device
	[  +1.975183] systemd-fstab-generator[3325]: Ignoring "noauto" for root device
	[  +3.946485] systemd-fstab-generator[3756]: Ignoring "noauto" for root device
	[  +1.000830] systemd-fstab-generator[3881]: Ignoring "noauto" for root device
	[Apr 8 17:50] kauditd_printk_skb: 68 callbacks suppressed
	[Apr 8 17:53] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.288428] systemd-fstab-generator[12083]: Ignoring "noauto" for root device
	[  +5.632434] systemd-fstab-generator[12688]: Ignoring "noauto" for root device
	[  +0.478143] systemd-fstab-generator[12819]: Ignoring "noauto" for root device
	
	
	==> etcd [b7f1267d9e43] <==
	{"level":"info","ts":"2024-04-08T17:53:51.312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-04-08T17:53:51.312Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-04-08T17:53:51.327Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-08T17:53:51.327Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-08T17:53:51.327Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-08T17:53:51.327Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-04-08T17:53:51.327Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-04-08T17:53:52.206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-08T17:53:52.206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-08T17:53:52.207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-04-08T17:53:52.207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-04-08T17:53:52.207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-04-08T17:53:52.207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-04-08T17:53:52.207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-04-08T17:53:52.207Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-603000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-08T17:53:52.207Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T17:53:52.207Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T17:53:52.207Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T17:53:52.208Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T17:53:52.208Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T17:53:52.208Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T17:53:52.208Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-08T17:53:52.208Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-08T17:53:52.208Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-08T17:53:52.208Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 17:58:13 up 9 min,  0 users,  load average: 0.34, 0.39, 0.22
	Linux running-upgrade-603000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [d1ba90cef09b] <==
	I0408 17:53:53.470211       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0408 17:53:53.470282       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0408 17:53:53.470310       1 cache.go:39] Caches are synced for autoregister controller
	I0408 17:53:53.470389       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0408 17:53:53.479715       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0408 17:53:53.500245       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0408 17:53:53.506293       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0408 17:53:54.201803       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0408 17:53:54.379695       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0408 17:53:54.386650       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0408 17:53:54.386793       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0408 17:53:54.548969       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0408 17:53:54.559676       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0408 17:53:54.639681       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0408 17:53:54.641865       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0408 17:53:54.642172       1 controller.go:611] quota admission added evaluator for: endpoints
	I0408 17:53:54.643566       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0408 17:53:55.535479       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0408 17:53:55.809617       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0408 17:53:55.812819       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0408 17:53:55.817491       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0408 17:53:55.860329       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0408 17:54:09.669481       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0408 17:54:09.768584       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0408 17:54:10.299581       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [20e2e023314e] <==
	I0408 17:54:08.868356       1 shared_informer.go:262] Caches are synced for PVC protection
	I0408 17:54:08.868227       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0408 17:54:08.868232       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0408 17:54:08.868236       1 shared_informer.go:262] Caches are synced for deployment
	I0408 17:54:08.868240       1 shared_informer.go:262] Caches are synced for expand
	I0408 17:54:08.868258       1 shared_informer.go:262] Caches are synced for endpoint
	I0408 17:54:08.870822       1 shared_informer.go:262] Caches are synced for namespace
	I0408 17:54:08.870858       1 shared_informer.go:262] Caches are synced for stateful set
	I0408 17:54:08.874116       1 shared_informer.go:262] Caches are synced for HPA
	I0408 17:54:08.912638       1 shared_informer.go:262] Caches are synced for taint
	I0408 17:54:08.912690       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0408 17:54:08.912713       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-603000. Assuming now as a timestamp.
	I0408 17:54:08.912731       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0408 17:54:08.912793       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0408 17:54:08.912874       1 event.go:294] "Event occurred" object="running-upgrade-603000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-603000 event: Registered Node running-upgrade-603000 in Controller"
	I0408 17:54:08.966827       1 shared_informer.go:262] Caches are synced for daemon sets
	I0408 17:54:09.054626       1 shared_informer.go:262] Caches are synced for resource quota
	I0408 17:54:09.074149       1 shared_informer.go:262] Caches are synced for resource quota
	I0408 17:54:09.490420       1 shared_informer.go:262] Caches are synced for garbage collector
	I0408 17:54:09.497711       1 shared_informer.go:262] Caches are synced for garbage collector
	I0408 17:54:09.497744       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0408 17:54:09.671350       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0408 17:54:09.774126       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8q6mf"
	I0408 17:54:09.870337       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-5ctf9"
	I0408 17:54:09.875111       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-bbfml"
	
	
	==> kube-proxy [01d6b1fb69ca] <==
	I0408 17:54:10.271841       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0408 17:54:10.271870       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0408 17:54:10.271883       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0408 17:54:10.296909       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0408 17:54:10.296919       1 server_others.go:206] "Using iptables Proxier"
	I0408 17:54:10.297080       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0408 17:54:10.297206       1 server.go:661] "Version info" version="v1.24.1"
	I0408 17:54:10.297215       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 17:54:10.297542       1 config.go:317] "Starting service config controller"
	I0408 17:54:10.297553       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0408 17:54:10.297561       1 config.go:226] "Starting endpoint slice config controller"
	I0408 17:54:10.297564       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0408 17:54:10.297823       1 config.go:444] "Starting node config controller"
	I0408 17:54:10.297825       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0408 17:54:10.397611       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0408 17:54:10.397639       1 shared_informer.go:262] Caches are synced for service config
	I0408 17:54:10.397882       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [34b4726b2637] <==
	W0408 17:53:53.431554       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 17:53:53.431557       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0408 17:53:53.431568       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 17:53:53.431571       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0408 17:53:53.431582       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 17:53:53.431585       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0408 17:53:53.431603       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 17:53:53.431606       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 17:53:53.431620       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0408 17:53:53.431623       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0408 17:53:53.431638       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 17:53:53.431645       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0408 17:53:53.431680       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 17:53:53.431715       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0408 17:53:54.258438       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 17:53:54.258499       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0408 17:53:54.283588       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 17:53:54.283656       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0408 17:53:54.336012       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0408 17:53:54.336336       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0408 17:53:54.401864       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 17:53:54.402185       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0408 17:53:54.457195       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 17:53:54.457208       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0408 17:53:54.928104       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-04-08 17:49:06 UTC, ends at Mon 2024-04-08 17:58:13 UTC. --
	Apr 08 17:53:57 running-upgrade-603000 kubelet[12694]: E0408 17:53:57.445493   12694 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-603000\" already exists" pod="kube-system/etcd-running-upgrade-603000"
	Apr 08 17:54:08 running-upgrade-603000 kubelet[12694]: I0408 17:54:08.868831   12694 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 08 17:54:08 running-upgrade-603000 kubelet[12694]: I0408 17:54:08.869796   12694 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 08 17:54:08 running-upgrade-603000 kubelet[12694]: I0408 17:54:08.918688   12694 topology_manager.go:200] "Topology Admit Handler"
	Apr 08 17:54:08 running-upgrade-603000 kubelet[12694]: I0408 17:54:08.968870   12694 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f7310532-042f-4efb-8437-d73a904b641d-tmp\") pod \"storage-provisioner\" (UID: \"f7310532-042f-4efb-8437-d73a904b641d\") " pod="kube-system/storage-provisioner"
	Apr 08 17:54:08 running-upgrade-603000 kubelet[12694]: I0408 17:54:08.968896   12694 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8v5f\" (UniqueName: \"kubernetes.io/projected/f7310532-042f-4efb-8437-d73a904b641d-kube-api-access-v8v5f\") pod \"storage-provisioner\" (UID: \"f7310532-042f-4efb-8437-d73a904b641d\") " pod="kube-system/storage-provisioner"
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: E0408 17:54:09.073728   12694 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: E0408 17:54:09.073786   12694 projected.go:192] Error preparing data for projected volume kube-api-access-v8v5f for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: E0408 17:54:09.073826   12694 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/f7310532-042f-4efb-8437-d73a904b641d-kube-api-access-v8v5f podName:f7310532-042f-4efb-8437-d73a904b641d nodeName:}" failed. No retries permitted until 2024-04-08 17:54:09.573812069 +0000 UTC m=+13.776122801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v8v5f" (UniqueName: "kubernetes.io/projected/f7310532-042f-4efb-8437-d73a904b641d-kube-api-access-v8v5f") pod "storage-provisioner" (UID: "f7310532-042f-4efb-8437-d73a904b641d") : configmap "kube-root-ca.crt" not found
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: I0408 17:54:09.776320   12694 topology_manager.go:200] "Topology Admit Handler"
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: I0408 17:54:09.877395   12694 topology_manager.go:200] "Topology Admit Handler"
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: I0408 17:54:09.878157   12694 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d29f45e-b6c5-4203-b697-b43a905f1d1c-lib-modules\") pod \"kube-proxy-8q6mf\" (UID: \"7d29f45e-b6c5-4203-b697-b43a905f1d1c\") " pod="kube-system/kube-proxy-8q6mf"
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: I0408 17:54:09.878173   12694 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4rpn\" (UniqueName: \"kubernetes.io/projected/7d29f45e-b6c5-4203-b697-b43a905f1d1c-kube-api-access-c4rpn\") pod \"kube-proxy-8q6mf\" (UID: \"7d29f45e-b6c5-4203-b697-b43a905f1d1c\") " pod="kube-system/kube-proxy-8q6mf"
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: I0408 17:54:09.878183   12694 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7d29f45e-b6c5-4203-b697-b43a905f1d1c-kube-proxy\") pod \"kube-proxy-8q6mf\" (UID: \"7d29f45e-b6c5-4203-b697-b43a905f1d1c\") " pod="kube-system/kube-proxy-8q6mf"
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: I0408 17:54:09.878195   12694 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d29f45e-b6c5-4203-b697-b43a905f1d1c-xtables-lock\") pod \"kube-proxy-8q6mf\" (UID: \"7d29f45e-b6c5-4203-b697-b43a905f1d1c\") " pod="kube-system/kube-proxy-8q6mf"
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: I0408 17:54:09.882351   12694 topology_manager.go:200] "Topology Admit Handler"
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: E0408 17:54:09.975704   12694 remote_runtime.go:578] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: d175f0beb5d41ad887d9d78618690ac5338a6c7650082d7c072ed3e417c295bd" containerID="d175f0beb5d41ad887d9d78618690ac5338a6c7650082d7c072ed3e417c295bd"
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: E0408 17:54:09.975727   12694 kuberuntime_manager.go:1069] "getPodContainerStatuses for pod failed" err="rpc error: code = Unknown desc = Error: No such container: d175f0beb5d41ad887d9d78618690ac5338a6c7650082d7c072ed3e417c295bd" pod="kube-system/storage-provisioner"
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: E0408 17:54:09.975734   12694 generic.go:415] "PLEG: Write status" err="rpc error: code = Unknown desc = Error: No such container: d175f0beb5d41ad887d9d78618690ac5338a6c7650082d7c072ed3e417c295bd" pod="kube-system/storage-provisioner"
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: I0408 17:54:09.978259   12694 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nn4g\" (UniqueName: \"kubernetes.io/projected/8c9dc19a-df7a-4e38-b2c6-1a9fd652a912-kube-api-access-2nn4g\") pod \"coredns-6d4b75cb6d-5ctf9\" (UID: \"8c9dc19a-df7a-4e38-b2c6-1a9fd652a912\") " pod="kube-system/coredns-6d4b75cb6d-5ctf9"
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: I0408 17:54:09.978273   12694 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02ac5043-4421-4c3c-80ad-5479bf65e74b-config-volume\") pod \"coredns-6d4b75cb6d-bbfml\" (UID: \"02ac5043-4421-4c3c-80ad-5479bf65e74b\") " pod="kube-system/coredns-6d4b75cb6d-bbfml"
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: I0408 17:54:09.978283   12694 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c9dc19a-df7a-4e38-b2c6-1a9fd652a912-config-volume\") pod \"coredns-6d4b75cb6d-5ctf9\" (UID: \"8c9dc19a-df7a-4e38-b2c6-1a9fd652a912\") " pod="kube-system/coredns-6d4b75cb6d-5ctf9"
	Apr 08 17:54:09 running-upgrade-603000 kubelet[12694]: I0408 17:54:09.978293   12694 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hmh6\" (UniqueName: \"kubernetes.io/projected/02ac5043-4421-4c3c-80ad-5479bf65e74b-kube-api-access-2hmh6\") pod \"coredns-6d4b75cb6d-bbfml\" (UID: \"02ac5043-4421-4c3c-80ad-5479bf65e74b\") " pod="kube-system/coredns-6d4b75cb6d-bbfml"
	Apr 08 17:57:58 running-upgrade-603000 kubelet[12694]: I0408 17:57:58.542491   12694 scope.go:110] "RemoveContainer" containerID="4c05907bbc81283dcd810364c0138effd19128157ec721700f202a6f6b428329"
	Apr 08 17:57:58 running-upgrade-603000 kubelet[12694]: I0408 17:57:58.563156   12694 scope.go:110] "RemoveContainer" containerID="e0304763bc53b595e54ad1110be040e9bd1ec100e30488eebc29020af6bc232f"
	
	
	==> storage-provisioner [d175f0beb5d4] <==
	I0408 17:54:10.022408       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 17:54:10.026936       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 17:54:10.026978       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 17:54:10.031684       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 17:54:10.031740       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-603000_7560290d-e48e-4ecf-a40a-dac963517bff!
	I0408 17:54:10.032385       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"59f1af1e-f67e-4772-8643-32415ab88048", APIVersion:"v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-603000_7560290d-e48e-4ecf-a40a-dac963517bff became leader
	I0408 17:54:10.132724       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-603000_7560290d-e48e-4ecf-a40a-dac963517bff!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-603000 -n running-upgrade-603000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-603000 -n running-upgrade-603000: exit status 2 (15.639634666s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-603000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-603000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-603000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-603000: (2.300976459s)
--- FAIL: TestRunningBinaryUpgrade (588.98s)

                                                
                                    
TestKubernetesUpgrade (17.72s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-633000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-633000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.148027333s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-633000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-633000" primary control-plane node in "kubernetes-upgrade-633000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-633000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 10:51:42.543307    9014 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:51:42.543434    9014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:51:42.543437    9014 out.go:304] Setting ErrFile to fd 2...
	I0408 10:51:42.543439    9014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:51:42.543574    9014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:51:42.544617    9014 out.go:298] Setting JSON to false
	I0408 10:51:42.561109    9014 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6672,"bootTime":1712592030,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:51:42.561169    9014 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:51:42.567703    9014 out.go:177] * [kubernetes-upgrade-633000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:51:42.576503    9014 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:51:42.576564    9014 notify.go:220] Checking for updates...
	I0408 10:51:42.581472    9014 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:51:42.584484    9014 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:51:42.587507    9014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:51:42.590445    9014 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:51:42.593493    9014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:51:42.596793    9014 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:51:42.596863    9014 config.go:182] Loaded profile config "running-upgrade-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 10:51:42.596906    9014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:51:42.601461    9014 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 10:51:42.608512    9014 start.go:297] selected driver: qemu2
	I0408 10:51:42.608522    9014 start.go:901] validating driver "qemu2" against <nil>
	I0408 10:51:42.608530    9014 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:51:42.610952    9014 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 10:51:42.613424    9014 out.go:177] * Automatically selected the socket_vmnet network
	I0408 10:51:42.616570    9014 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 10:51:42.616606    9014 cni.go:84] Creating CNI manager for ""
	I0408 10:51:42.616613    9014 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0408 10:51:42.616644    9014 start.go:340] cluster config:
	{Name:kubernetes-upgrade-633000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-633000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:51:42.621346    9014 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:51:42.626509    9014 out.go:177] * Starting "kubernetes-upgrade-633000" primary control-plane node in "kubernetes-upgrade-633000" cluster
	I0408 10:51:42.630470    9014 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 10:51:42.630485    9014 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0408 10:51:42.630493    9014 cache.go:56] Caching tarball of preloaded images
	I0408 10:51:42.630549    9014 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:51:42.630555    9014 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0408 10:51:42.630605    9014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/kubernetes-upgrade-633000/config.json ...
	I0408 10:51:42.630620    9014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/kubernetes-upgrade-633000/config.json: {Name:mk08814ca88901ac16c4f473a812565dc8767898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:51:42.630845    9014 start.go:360] acquireMachinesLock for kubernetes-upgrade-633000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:51:42.630881    9014 start.go:364] duration metric: took 25.708µs to acquireMachinesLock for "kubernetes-upgrade-633000"
	I0408 10:51:42.630894    9014 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-633000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-633000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:51:42.630918    9014 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:51:42.639506    9014 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 10:51:42.666828    9014 start.go:159] libmachine.API.Create for "kubernetes-upgrade-633000" (driver="qemu2")
	I0408 10:51:42.666861    9014 client.go:168] LocalClient.Create starting
	I0408 10:51:42.666935    9014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:51:42.666983    9014 main.go:141] libmachine: Decoding PEM data...
	I0408 10:51:42.666991    9014 main.go:141] libmachine: Parsing certificate...
	I0408 10:51:42.667032    9014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:51:42.667054    9014 main.go:141] libmachine: Decoding PEM data...
	I0408 10:51:42.667060    9014 main.go:141] libmachine: Parsing certificate...
	I0408 10:51:42.667405    9014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:51:42.830977    9014 main.go:141] libmachine: Creating SSH key...
	I0408 10:51:43.161839    9014 main.go:141] libmachine: Creating Disk image...
	I0408 10:51:43.161852    9014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:51:43.162160    9014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/disk.qcow2
	I0408 10:51:43.176020    9014 main.go:141] libmachine: STDOUT: 
	I0408 10:51:43.176047    9014 main.go:141] libmachine: STDERR: 
	I0408 10:51:43.176124    9014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/disk.qcow2 +20000M
	I0408 10:51:43.187399    9014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:51:43.187425    9014 main.go:141] libmachine: STDERR: 
	I0408 10:51:43.187439    9014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/disk.qcow2
	I0408 10:51:43.187445    9014 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:51:43.187486    9014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:7a:7a:9c:44:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/disk.qcow2
	I0408 10:51:43.189198    9014 main.go:141] libmachine: STDOUT: 
	I0408 10:51:43.189216    9014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:51:43.189239    9014 client.go:171] duration metric: took 522.367208ms to LocalClient.Create
	I0408 10:51:45.191498    9014 start.go:128] duration metric: took 2.560529917s to createHost
	I0408 10:51:45.191575    9014 start.go:83] releasing machines lock for "kubernetes-upgrade-633000", held for 2.560666333s
	W0408 10:51:45.191649    9014 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:51:45.208381    9014 out.go:177] * Deleting "kubernetes-upgrade-633000" in qemu2 ...
	W0408 10:51:45.236994    9014 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:51:45.237047    9014 start.go:728] Will try again in 5 seconds ...
	I0408 10:51:50.238422    9014 start.go:360] acquireMachinesLock for kubernetes-upgrade-633000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:51:50.238667    9014 start.go:364] duration metric: took 196.125µs to acquireMachinesLock for "kubernetes-upgrade-633000"
	I0408 10:51:50.238746    9014 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-633000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-633000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:51:50.238882    9014 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:51:50.247488    9014 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 10:51:50.274808    9014 start.go:159] libmachine.API.Create for "kubernetes-upgrade-633000" (driver="qemu2")
	I0408 10:51:50.274850    9014 client.go:168] LocalClient.Create starting
	I0408 10:51:50.274930    9014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:51:50.274977    9014 main.go:141] libmachine: Decoding PEM data...
	I0408 10:51:50.274989    9014 main.go:141] libmachine: Parsing certificate...
	I0408 10:51:50.275031    9014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:51:50.275064    9014 main.go:141] libmachine: Decoding PEM data...
	I0408 10:51:50.275081    9014 main.go:141] libmachine: Parsing certificate...
	I0408 10:51:50.275480    9014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:51:50.431477    9014 main.go:141] libmachine: Creating SSH key...
	I0408 10:51:50.589181    9014 main.go:141] libmachine: Creating Disk image...
	I0408 10:51:50.589192    9014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:51:50.589473    9014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/disk.qcow2
	I0408 10:51:50.602442    9014 main.go:141] libmachine: STDOUT: 
	I0408 10:51:50.602475    9014 main.go:141] libmachine: STDERR: 
	I0408 10:51:50.602548    9014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/disk.qcow2 +20000M
	I0408 10:51:50.613536    9014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:51:50.613565    9014 main.go:141] libmachine: STDERR: 
	I0408 10:51:50.613584    9014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/disk.qcow2
	I0408 10:51:50.613589    9014 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:51:50.613628    9014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:cf:5e:f0:28:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/disk.qcow2
	I0408 10:51:50.615490    9014 main.go:141] libmachine: STDOUT: 
	I0408 10:51:50.615519    9014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:51:50.615535    9014 client.go:171] duration metric: took 340.677667ms to LocalClient.Create
	I0408 10:51:52.617736    9014 start.go:128] duration metric: took 2.378809125s to createHost
	I0408 10:51:52.617863    9014 start.go:83] releasing machines lock for "kubernetes-upgrade-633000", held for 2.3790885s
	W0408 10:51:52.618187    9014 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-633000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-633000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:51:52.625598    9014 out.go:177] 
	W0408 10:51:52.633690    9014 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:51:52.633725    9014 out.go:239] * 
	* 
	W0408 10:51:52.636743    9014 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:51:52.646670    9014 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-633000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-633000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-633000: (2.111413583s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-633000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-633000 status --format={{.Host}}: exit status 7 (60.442041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-633000 --memory=2200 --kubernetes-version=v1.30.0-rc.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-633000 --memory=2200 --kubernetes-version=v1.30.0-rc.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.198448791s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-633000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-633000" primary control-plane node in "kubernetes-upgrade-633000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-633000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-633000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 10:51:54.868044    9045 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:51:54.868162    9045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:51:54.868165    9045 out.go:304] Setting ErrFile to fd 2...
	I0408 10:51:54.868175    9045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:51:54.868300    9045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:51:54.869358    9045 out.go:298] Setting JSON to false
	I0408 10:51:54.885637    9045 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6684,"bootTime":1712592030,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:51:54.885694    9045 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:51:54.890189    9045 out.go:177] * [kubernetes-upgrade-633000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:51:54.897198    9045 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:51:54.900150    9045 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:51:54.897280    9045 notify.go:220] Checking for updates...
	I0408 10:51:54.907158    9045 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:51:54.910196    9045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:51:54.916144    9045 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:51:54.924130    9045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:51:54.928398    9045 config.go:182] Loaded profile config "kubernetes-upgrade-633000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0408 10:51:54.928642    9045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:51:54.933140    9045 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 10:51:54.945115    9045 start.go:297] selected driver: qemu2
	I0408 10:51:54.945120    9045 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-633000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-633000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:51:54.945167    9045 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:51:54.947551    9045 cni.go:84] Creating CNI manager for ""
	I0408 10:51:54.947570    9045 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:51:54.947589    9045 start.go:340] cluster config:
	{Name:kubernetes-upgrade-633000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:kubernetes-upgrade-633000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:51:54.951827    9045 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:51:54.956187    9045 out.go:177] * Starting "kubernetes-upgrade-633000" primary control-plane node in "kubernetes-upgrade-633000" cluster
	I0408 10:51:54.963153    9045 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime docker
	I0408 10:51:54.963166    9045 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-docker-overlay2-arm64.tar.lz4
	I0408 10:51:54.963172    9045 cache.go:56] Caching tarball of preloaded images
	I0408 10:51:54.963223    9045 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:51:54.963229    9045 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.1 on docker
	I0408 10:51:54.963284    9045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/kubernetes-upgrade-633000/config.json ...
	I0408 10:51:54.963621    9045 start.go:360] acquireMachinesLock for kubernetes-upgrade-633000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:51:54.963646    9045 start.go:364] duration metric: took 19.083µs to acquireMachinesLock for "kubernetes-upgrade-633000"
	I0408 10:51:54.963654    9045 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:51:54.963660    9045 fix.go:54] fixHost starting: 
	I0408 10:51:54.963767    9045 fix.go:112] recreateIfNeeded on kubernetes-upgrade-633000: state=Stopped err=<nil>
	W0408 10:51:54.963774    9045 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:51:54.972122    9045 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-633000" ...
	I0408 10:51:54.975185    9045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:cf:5e:f0:28:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/disk.qcow2
	I0408 10:51:54.977053    9045 main.go:141] libmachine: STDOUT: 
	I0408 10:51:54.977076    9045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:51:54.977105    9045 fix.go:56] duration metric: took 13.443459ms for fixHost
	I0408 10:51:54.977109    9045 start.go:83] releasing machines lock for "kubernetes-upgrade-633000", held for 13.458792ms
	W0408 10:51:54.977114    9045 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:51:54.977141    9045 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:51:54.977145    9045 start.go:728] Will try again in 5 seconds ...
	I0408 10:51:59.979348    9045 start.go:360] acquireMachinesLock for kubernetes-upgrade-633000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:51:59.979932    9045 start.go:364] duration metric: took 469.583µs to acquireMachinesLock for "kubernetes-upgrade-633000"
	I0408 10:51:59.980096    9045 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:51:59.980118    9045 fix.go:54] fixHost starting: 
	I0408 10:51:59.980870    9045 fix.go:112] recreateIfNeeded on kubernetes-upgrade-633000: state=Stopped err=<nil>
	W0408 10:51:59.980897    9045 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:51:59.987940    9045 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-633000" ...
	I0408 10:51:59.992475    9045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:cf:5e:f0:28:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubernetes-upgrade-633000/disk.qcow2
	I0408 10:52:00.002582    9045 main.go:141] libmachine: STDOUT: 
	I0408 10:52:00.002658    9045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:52:00.002729    9045 fix.go:56] duration metric: took 22.616792ms for fixHost
	I0408 10:52:00.002745    9045 start.go:83] releasing machines lock for "kubernetes-upgrade-633000", held for 22.791291ms
	W0408 10:52:00.002995    9045 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-633000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-633000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:52:00.006357    9045 out.go:177] 
	W0408 10:52:00.009329    9045 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:52:00.009373    9045 out.go:239] * 
	* 
	W0408 10:52:00.011683    9045 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:52:00.022272    9045 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-633000 --memory=2200 --kubernetes-version=v1.30.0-rc.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-633000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-633000 version --output=json: exit status 1 (59.341917ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-633000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-04-08 10:52:00.095367 -0700 PDT m=+988.977413584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-633000 -n kubernetes-upgrade-633000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-633000 -n kubernetes-upgrade-633000: exit status 7 (35.2575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-633000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-633000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-633000
--- FAIL: TestKubernetesUpgrade (17.72s)
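
Every qemu2 start in this test fails before a VM boots: each qemu-system-aarch64 launch is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet, the client exits with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, and minikube aborts with GUEST_PROVISION (exit status 80). Connection refused means no socket_vmnet daemon was accepting connections on that socket path on the CI host. A minimal triage sketch, assuming shell access to the affected host; the socket path is taken from the log above, while the restart command assumes a Homebrew-managed socket_vmnet and would differ for a manual install:

	# Does the control socket exist, and does any running daemon own it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If socket_vmnet was installed via Homebrew, it runs as a root service;
	# restarting it typically clears a stale or missing socket (assumption:
	# Homebrew install; a manual install would be restarted by other means).
	sudo brew services restart socket_vmnet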

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.63s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=18585
- KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current849446312/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.63s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.25s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=18585
- KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1004351458/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.25s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (573.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2833889918 start -p stopped-upgrade-476000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2833889918 start -p stopped-upgrade-476000 --memory=2200 --vm-driver=qemu2 : (39.151569875s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2833889918 -p stopped-upgrade-476000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2833889918 -p stopped-upgrade-476000 stop: (12.112639875s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-476000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-476000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.049773125s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-476000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-476000" primary control-plane node in "stopped-upgrade-476000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-476000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0408 10:52:52.588913    9084 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:52:52.589061    9084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:52:52.589065    9084 out.go:304] Setting ErrFile to fd 2...
	I0408 10:52:52.589068    9084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:52:52.589232    9084 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:52:52.590387    9084 out.go:298] Setting JSON to false
	I0408 10:52:52.609840    9084 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6742,"bootTime":1712592030,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:52:52.609899    9084 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:52:52.614702    9084 out.go:177] * [stopped-upgrade-476000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:52:52.622648    9084 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:52:52.627604    9084 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:52:52.622702    9084 notify.go:220] Checking for updates...
	I0408 10:52:52.633616    9084 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:52:52.636636    9084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:52:52.639568    9084 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:52:52.642637    9084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:52:52.645917    9084 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 10:52:52.649569    9084 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 10:52:52.652634    9084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:52:52.655554    9084 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 10:52:52.662614    9084 start.go:297] selected driver: qemu2
	I0408 10:52:52.662621    9084 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 10:52:52.662677    9084 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:52:52.665456    9084 cni.go:84] Creating CNI manager for ""
	I0408 10:52:52.665471    9084 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:52:52.665494    9084 start.go:340] cluster config:
	{Name:stopped-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 10:52:52.665547    9084 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:52:52.672603    9084 out.go:177] * Starting "stopped-upgrade-476000" primary control-plane node in "stopped-upgrade-476000" cluster
	I0408 10:52:52.676666    9084 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0408 10:52:52.676684    9084 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0408 10:52:52.676696    9084 cache.go:56] Caching tarball of preloaded images
	I0408 10:52:52.676755    9084 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:52:52.676760    9084 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0408 10:52:52.676819    9084 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/config.json ...
	I0408 10:52:52.677355    9084 start.go:360] acquireMachinesLock for stopped-upgrade-476000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:52:52.677387    9084 start.go:364] duration metric: took 25.709µs to acquireMachinesLock for "stopped-upgrade-476000"
	I0408 10:52:52.677395    9084 start.go:96] Skipping create...Using existing machine configuration
	I0408 10:52:52.677399    9084 fix.go:54] fixHost starting: 
	I0408 10:52:52.677512    9084 fix.go:112] recreateIfNeeded on stopped-upgrade-476000: state=Stopped err=<nil>
	W0408 10:52:52.677520    9084 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 10:52:52.684634    9084 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-476000" ...
	I0408 10:52:52.688722    9084 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51442-:22,hostfwd=tcp::51443-:2376,hostname=stopped-upgrade-476000 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/disk.qcow2
	I0408 10:52:52.737553    9084 main.go:141] libmachine: STDOUT: 
	I0408 10:52:52.737582    9084 main.go:141] libmachine: STDERR: 
	I0408 10:52:52.737588    9084 main.go:141] libmachine: Waiting for VM to start (ssh -p 51442 docker@127.0.0.1)...
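
The restart drives qemu-system-aarch64 directly: Hypervisor.framework acceleration (-accel hvf) with the host CPU model, the EDK2 UEFI firmware mapped as a read-only pflash drive, the boot2docker ISO as boot media, and user-mode networking that forwards host ports 51442 and 51443 to the guest's SSH (22) and Docker (2376) ports. The same invocation, with the long per-profile paths shortened to "...":

    qemu-system-aarch64 \
        -M virt,highmem=off -cpu host -accel hvf -display none \
        -m 2200 -smp 2 \
        -drive file=.../edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
        -boot d -cdrom .../boot2docker.iso \
        -qmp unix:.../monitor,server,nowait \
        -pidfile .../qemu.pid \
        -nic user,model=virtio,hostfwd=tcp::51442-:22,hostfwd=tcp::51443-:2376,hostname=stopped-upgrade-476000 \
        -daemonize .../disk.qcow2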
	I0408 10:53:13.430943    9084 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/config.json ...
	I0408 10:53:13.431749    9084 machine.go:94] provisionDockerMachine start ...
	I0408 10:53:13.431965    9084 main.go:141] libmachine: Using SSH client type: native
	I0408 10:53:13.432520    9084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fedc80] 0x102ff04e0 <nil>  [] 0s} localhost 51442 <nil> <nil>}
	I0408 10:53:13.432538    9084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 10:53:13.512780    9084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 10:53:13.512825    9084 buildroot.go:166] provisioning hostname "stopped-upgrade-476000"
	I0408 10:53:13.512946    9084 main.go:141] libmachine: Using SSH client type: native
	I0408 10:53:13.513182    9084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fedc80] 0x102ff04e0 <nil>  [] 0s} localhost 51442 <nil> <nil>}
	I0408 10:53:13.513194    9084 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-476000 && echo "stopped-upgrade-476000" | sudo tee /etc/hostname
	I0408 10:53:13.590866    9084 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-476000
	
	I0408 10:53:13.590959    9084 main.go:141] libmachine: Using SSH client type: native
	I0408 10:53:13.591144    9084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fedc80] 0x102ff04e0 <nil>  [] 0s} localhost 51442 <nil> <nil>}
	I0408 10:53:13.591156    9084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-476000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-476000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-476000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 10:53:13.658265    9084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 10:53:13.658279    9084 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18585-6624/.minikube CaCertPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18585-6624/.minikube}
	I0408 10:53:13.658287    9084 buildroot.go:174] setting up certificates
	I0408 10:53:13.658293    9084 provision.go:84] configureAuth start
	I0408 10:53:13.658298    9084 provision.go:143] copyHostCerts
	I0408 10:53:13.658379    9084 exec_runner.go:144] found /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.pem, removing ...
	I0408 10:53:13.658387    9084 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.pem
	I0408 10:53:13.658505    9084 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.pem (1082 bytes)
	I0408 10:53:13.658728    9084 exec_runner.go:144] found /Users/jenkins/minikube-integration/18585-6624/.minikube/cert.pem, removing ...
	I0408 10:53:13.658733    9084 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18585-6624/.minikube/cert.pem
	I0408 10:53:13.658795    9084 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18585-6624/.minikube/cert.pem (1123 bytes)
	I0408 10:53:13.658935    9084 exec_runner.go:144] found /Users/jenkins/minikube-integration/18585-6624/.minikube/key.pem, removing ...
	I0408 10:53:13.658939    9084 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18585-6624/.minikube/key.pem
	I0408 10:53:13.658995    9084 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18585-6624/.minikube/key.pem (1675 bytes)
	I0408 10:53:13.659114    9084 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-476000 san=[127.0.0.1 localhost minikube stopped-upgrade-476000]
	I0408 10:53:13.702988    9084 provision.go:177] copyRemoteCerts
	I0408 10:53:13.703026    9084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 10:53:13.703032    9084 sshutil.go:53] new ssh client: &{IP:localhost Port:51442 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/id_rsa Username:docker}
	I0408 10:53:13.736627    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 10:53:13.743433    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0408 10:53:13.749956    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 10:53:13.757393    9084 provision.go:87] duration metric: took 99.088875ms to configureAuth
	I0408 10:53:13.757402    9084 buildroot.go:189] setting minikube options for container-runtime
	I0408 10:53:13.757522    9084 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 10:53:13.757558    9084 main.go:141] libmachine: Using SSH client type: native
	I0408 10:53:13.757647    9084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fedc80] 0x102ff04e0 <nil>  [] 0s} localhost 51442 <nil> <nil>}
	I0408 10:53:13.757652    9084 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 10:53:13.819282    9084 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 10:53:13.819289    9084 buildroot.go:70] root file system type: tmpfs
	I0408 10:53:13.819342    9084 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 10:53:13.819391    9084 main.go:141] libmachine: Using SSH client type: native
	I0408 10:53:13.819504    9084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fedc80] 0x102ff04e0 <nil>  [] 0s} localhost 51442 <nil> <nil>}
	I0408 10:53:13.819539    9084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 10:53:13.882506    9084 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 10:53:13.882552    9084 main.go:141] libmachine: Using SSH client type: native
	I0408 10:53:13.882674    9084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fedc80] 0x102ff04e0 <nil>  [] 0s} localhost 51442 <nil> <nil>}
	I0408 10:53:13.882683    9084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 10:53:14.253880    9084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0408 10:53:14.253893    9084 machine.go:97] duration metric: took 822.128084ms to provisionDockerMachine
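
The unit install above is guarded so it stays idempotent: the rendered docker.service is written to a .new path, and the move/daemon-reload/enable/restart chain runs only when diff reports a difference (here the target did not exist yet, so the chain fired and the symlink was created). The pattern on its own:

    # Install a systemd unit only if it changed, then (re)start the service.
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
        sudo systemctl -f daemon-reload
        sudo systemctl -f enable docker
        sudo systemctl -f restart docker
    }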
	I0408 10:53:14.253899    9084 start.go:293] postStartSetup for "stopped-upgrade-476000" (driver="qemu2")
	I0408 10:53:14.253914    9084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 10:53:14.253991    9084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 10:53:14.254000    9084 sshutil.go:53] new ssh client: &{IP:localhost Port:51442 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/id_rsa Username:docker}
	I0408 10:53:14.288755    9084 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 10:53:14.290065    9084 info.go:137] Remote host: Buildroot 2021.02.12
	I0408 10:53:14.290076    9084 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18585-6624/.minikube/addons for local assets ...
	I0408 10:53:14.290156    9084 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18585-6624/.minikube/files for local assets ...
	I0408 10:53:14.290267    9084 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18585-6624/.minikube/files/etc/ssl/certs/70432.pem -> 70432.pem in /etc/ssl/certs
	I0408 10:53:14.290388    9084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 10:53:14.293411    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/files/etc/ssl/certs/70432.pem --> /etc/ssl/certs/70432.pem (1708 bytes)
	I0408 10:53:14.300606    9084 start.go:296] duration metric: took 46.693458ms for postStartSetup
	I0408 10:53:14.300622    9084 fix.go:56] duration metric: took 21.623118917s for fixHost
	I0408 10:53:14.300656    9084 main.go:141] libmachine: Using SSH client type: native
	I0408 10:53:14.300755    9084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fedc80] 0x102ff04e0 <nil>  [] 0s} localhost 51442 <nil> <nil>}
	I0408 10:53:14.300762    9084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 10:53:14.358564    9084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712598794.516776629
	
	I0408 10:53:14.358571    9084 fix.go:216] guest clock: 1712598794.516776629
	I0408 10:53:14.358575    9084 fix.go:229] Guest: 2024-04-08 10:53:14.516776629 -0700 PDT Remote: 2024-04-08 10:53:14.300624 -0700 PDT m=+21.748189376 (delta=216.152629ms)
	I0408 10:53:14.358585    9084 fix.go:200] guest clock delta is within tolerance: 216.152629ms
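
fixHost ends by sampling the guest clock over SSH and comparing it with the host clock, accepting the 216ms delta measured here. A standalone equivalent (the 2-second tolerance is an assumption for illustration; the sampling command and SSH port are from the log):

    guest=$(ssh -p 51442 docker@127.0.0.1 'date +%s.%N')
    host=$(date +%s.%N)
    # Accept the guest clock if it is within 2 seconds of the host clock.
    awk -v g="$guest" -v h="$host" \
        'BEGIN { d = g - h; if (d < 0) d = -d; if (d < 2) exit 0; exit 1 }' \
        && echo "guest clock delta within tolerance"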
	I0408 10:53:14.358588    9084 start.go:83] releasing machines lock for "stopped-upgrade-476000", held for 21.681092417s
	I0408 10:53:14.358655    9084 ssh_runner.go:195] Run: cat /version.json
	I0408 10:53:14.358658    9084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 10:53:14.358664    9084 sshutil.go:53] new ssh client: &{IP:localhost Port:51442 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/id_rsa Username:docker}
	I0408 10:53:14.358675    9084 sshutil.go:53] new ssh client: &{IP:localhost Port:51442 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/id_rsa Username:docker}
	W0408 10:53:14.359242    9084 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51442: connect: connection refused
	I0408 10:53:14.359269    9084 retry.go:31] will retry after 374.088625ms: dial tcp [::1]:51442: connect: connection refused
	W0408 10:53:14.387763    9084 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0408 10:53:14.387819    9084 ssh_runner.go:195] Run: systemctl --version
	I0408 10:53:14.389534    9084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 10:53:14.391196    9084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 10:53:14.391224    9084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0408 10:53:14.394027    9084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0408 10:53:14.399037    9084 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
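
The two find/sed pipelines above normalize any pre-existing bridge and podman CNI configs to the pod CIDR minikube expects, dropping IPv6 entries along the way. Stripped of the find wrapper and the IPv6 handling, the core rewrite is just (file name taken from the "configured" line above):

    # Force the CNI config onto the 10.244.0.0/16 pod network.
    sudo sed -i -r 's|"subnet": ".*"|"subnet": "10.244.0.0/16"|g' \
        /etc/cni/net.d/87-podman-bridge.conflist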
	I0408 10:53:14.399045    9084 start.go:494] detecting cgroup driver to use...
	I0408 10:53:14.399121    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 10:53:14.405992    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0408 10:53:14.409560    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 10:53:14.412870    9084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 10:53:14.412897    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 10:53:14.416290    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 10:53:14.419055    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 10:53:14.421953    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 10:53:14.425258    9084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 10:53:14.428834    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 10:53:14.431899    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 10:53:14.434526    9084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 10:53:14.437639    9084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 10:53:14.440795    9084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 10:53:14.443406    9084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:53:14.523245    9084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 10:53:14.528901    9084 start.go:494] detecting cgroup driver to use...
	I0408 10:53:14.528959    9084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 10:53:14.534883    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 10:53:14.540572    9084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 10:53:14.548860    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 10:53:14.553331    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 10:53:14.558019    9084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 10:53:14.617399    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 10:53:14.622690    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 10:53:14.628689    9084 ssh_runner.go:195] Run: which cri-dockerd
	I0408 10:53:14.629889    9084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 10:53:14.632690    9084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
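
With containerd and crio stopped, crictl is repointed from the containerd socket (written at 10:53:14.399) to cri-dockerd, and a CNI drop-in is staged for the cri-docker service. The crictl side of that wiring is a single YAML line:

    # Point crictl (and kubeadm's CRI probes) at the cri-dockerd socket.
    printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' \
        | sudo tee /etc/crictl.yaml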
	I0408 10:53:14.637535    9084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 10:53:14.715819    9084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 10:53:14.796133    9084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 10:53:14.796197    9084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0408 10:53:14.801878    9084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:53:14.878019    9084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 10:53:16.032286    9084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.154245917s)
	I0408 10:53:16.032364    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0408 10:53:16.037429    9084 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0408 10:53:16.042586    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 10:53:16.047469    9084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0408 10:53:16.123738    9084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0408 10:53:16.204605    9084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:53:16.279773    9084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0408 10:53:16.285349    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 10:53:16.289850    9084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:53:16.370647    9084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0408 10:53:16.411442    9084 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0408 10:53:16.411530    9084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0408 10:53:16.415036    9084 start.go:562] Will wait 60s for crictl version
	I0408 10:53:16.415095    9084 ssh_runner.go:195] Run: which crictl
	I0408 10:53:16.416417    9084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 10:53:16.431128    9084 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0408 10:53:16.431197    9084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 10:53:16.448055    9084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 10:53:16.472638    9084 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0408 10:53:16.472704    9084 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0408 10:53:16.473952    9084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
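
Rather than appending blindly, the /etc/hosts update filters out any existing host.minikube.internal entry and rewrites the file, so repeated starts stay idempotent. The same trick, unpacked (IP and hostname from the log; 10.0.2.2 is the host side of QEMU's user-mode network):

    # Idempotently pin host.minikube.internal to the QEMU user-net host address.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '10.0.2.2\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts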
	I0408 10:53:16.477797    9084 kubeadm.go:877] updating cluster {Name:stopped-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0408 10:53:16.477846    9084 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0408 10:53:16.477886    9084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 10:53:16.489035    9084 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0408 10:53:16.489045    9084 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0408 10:53:16.489095    9084 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 10:53:16.492383    9084 ssh_runner.go:195] Run: which lz4
	I0408 10:53:16.493697    9084 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 10:53:16.494802    9084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 10:53:16.494814    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0408 10:53:17.251143    9084 docker.go:649] duration metric: took 757.476417ms to copy over tarball
	I0408 10:53:17.251215    9084 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 10:53:18.424761    9084 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.173523375s)
	I0408 10:53:18.424773    9084 ssh_runner.go:146] rm: /preloaded.tar.lz4
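
Since /preloaded.tar.lz4 is absent in the guest, the ~360 MB preload tarball is copied in and unpacked directly over /var, seeding /var/lib/docker with the cached images in one step. The flow, standalone (scp/ssh stand in for minikube's internal ssh_runner; paths shortened to "..."):

    # Copy the preload into the guest and unpack it over /var.
    scp -P 51442 .../preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 docker@127.0.0.1:/preloaded.tar.lz4
    ssh -p 51442 docker@127.0.0.1 \
        'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'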
	I0408 10:53:18.440203    9084 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 10:53:18.442935    9084 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0408 10:53:18.447945    9084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:53:18.515788    9084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 10:53:20.061243    9084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.545428292s)
	I0408 10:53:20.061332    9084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 10:53:20.076424    9084 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0408 10:53:20.076439    9084 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0408 10:53:20.076444    9084 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 10:53:20.083205    9084 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0408 10:53:20.083202    9084 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 10:53:20.083298    9084 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 10:53:20.083429    9084 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 10:53:20.083476    9084 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0408 10:53:20.083529    9084 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 10:53:20.083720    9084 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:53:20.084038    9084 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:53:20.093093    9084 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 10:53:20.093168    9084 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0408 10:53:20.093930    9084 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 10:53:20.093944    9084 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 10:53:20.094016    9084 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 10:53:20.094034    9084 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:53:20.094054    9084 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:53:20.094132    9084 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0408 10:53:20.479844    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0408 10:53:20.495484    9084 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0408 10:53:20.495506    9084 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0408 10:53:20.495554    9084 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0408 10:53:20.504624    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 10:53:20.505760    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0408 10:53:20.505850    9084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0408 10:53:20.514362    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0408 10:53:20.515572    9084 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0408 10:53:20.515591    9084 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 10:53:20.515598    9084 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0408 10:53:20.515619    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0408 10:53:20.515631    9084 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0408 10:53:20.520364    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0408 10:53:20.530224    9084 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0408 10:53:20.530245    9084 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0408 10:53:20.530310    9084 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0408 10:53:20.534854    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0408 10:53:20.538033    9084 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0408 10:53:20.538046    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0408 10:53:20.541059    9084 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0408 10:53:20.541081    9084 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0408 10:53:20.541141    9084 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0408 10:53:20.545938    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0408 10:53:20.552236    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0408 10:53:20.579693    9084 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0408 10:53:20.579734    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0408 10:53:20.579779    9084 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0408 10:53:20.579796    9084 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0408 10:53:20.579841    9084 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	W0408 10:53:20.588371    9084 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0408 10:53:20.588499    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:53:20.589815    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0408 10:53:20.599263    9084 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0408 10:53:20.599290    9084 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:53:20.599360    9084 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0408 10:53:20.609421    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0408 10:53:20.609545    9084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0408 10:53:20.610907    9084 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0408 10:53:20.610920    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0408 10:53:20.642983    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0408 10:53:20.645612    9084 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0408 10:53:20.645630    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0408 10:53:20.653196    9084 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0408 10:53:20.653218    9084 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0408 10:53:20.653272    9084 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0408 10:53:20.689328    9084 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0408 10:53:20.689393    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0408 10:53:20.689486    9084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0408 10:53:20.691006    9084 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0408 10:53:20.691019    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W0408 10:53:20.834015    9084 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0408 10:53:20.834118    9084 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:53:20.857868    9084 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0408 10:53:20.857894    9084 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:53:20.857950    9084 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:53:20.871711    9084 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0408 10:53:20.871728    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0408 10:53:20.879249    9084 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0408 10:53:20.879371    9084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0408 10:53:21.019587    9084 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0408 10:53:21.019634    9084 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0408 10:53:21.019661    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0408 10:53:21.047293    9084 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0408 10:53:21.047309    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0408 10:53:21.285508    9084 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0408 10:53:21.285549    9084 cache_images.go:92] duration metric: took 1.20909225s to LoadCachedImages
	W0408 10:53:21.285587    9084 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
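
The extracted preload still carries the old k8s.gcr.io image names, so every registry.k8s.io image "needs transfer": the expected hash is missing from the runtime, the stale tag is removed, and the image is reloaded from the host-side cache. Per image that reduces to the commands below (pause:3.7 shown; the run ultimately fails because the kube-controller-manager cache file is missing on the host, which is the X error above):

    # Replace a stale image with the cached arm64 build (run inside the guest).
    docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7  # hash mismatch detected
    docker rmi registry.k8s.io/pause:3.7                               # drop the stale tag
    # (pause_3.7 tarball is then copied to /var/lib/minikube/images/)
    sudo cat /var/lib/minikube/images/pause_3.7 | docker load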
	I0408 10:53:21.285600    9084 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0408 10:53:21.285656    9084 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-476000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 10:53:21.285720    9084 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0408 10:53:21.299346    9084 cni.go:84] Creating CNI manager for ""
	I0408 10:53:21.299360    9084 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:53:21.299365    9084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 10:53:21.299374    9084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-476000 NodeName:stopped-upgrade-476000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 10:53:21.299444    9084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-476000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 10:53:21.299514    9084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0408 10:53:21.302278    9084 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 10:53:21.302304    9084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 10:53:21.305184    9084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0408 10:53:21.310073    9084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 10:53:21.314981    9084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0408 10:53:21.320237    9084 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0408 10:53:21.321355    9084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
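	For readability, the one-line /etc/hosts rewrite above is equivalent to the following sketch (the same idempotent pattern: strip any stale control-plane entry, append the current mapping, and replace the file through a temp copy):

	    # Drop any old control-plane.minikube.internal line, append the
	    # current one, then copy over /etc/hosts via a temp file.
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      printf '10.0.2.15\tcontrol-plane.minikube.internal\n'
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts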
	I0408 10:53:21.324744    9084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:53:21.391192    9084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 10:53:21.406274    9084 certs.go:68] Setting up /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000 for IP: 10.0.2.15
	I0408 10:53:21.406284    9084 certs.go:194] generating shared ca certs ...
	I0408 10:53:21.406293    9084 certs.go:226] acquiring lock for ca certs: {Name:mkfcdee1cac51c6f74fa377d8d75e68d66123e1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:53:21.406452    9084 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.key
	I0408 10:53:21.406501    9084 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/proxy-client-ca.key
	I0408 10:53:21.406506    9084 certs.go:256] generating profile certs ...
	I0408 10:53:21.406604    9084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/client.key
	I0408 10:53:21.406621    9084 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.key.b209df07
	I0408 10:53:21.406643    9084 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.crt.b209df07 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0408 10:53:21.503350    9084 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.crt.b209df07 ...
	I0408 10:53:21.503366    9084 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.crt.b209df07: {Name:mk157ba66346fcfc45e97c4ae63aceb5f9cbdb80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:53:21.503698    9084 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.key.b209df07 ...
	I0408 10:53:21.503704    9084 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.key.b209df07: {Name:mk4f23acaf4862cb3acdffcb9c85638e6ba51c52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:53:21.503836    9084 certs.go:381] copying /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.crt.b209df07 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.crt
	I0408 10:53:21.503955    9084 certs.go:385] copying /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.key.b209df07 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.key
	I0408 10:53:21.504094    9084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/proxy-client.key
	I0408 10:53:21.504219    9084 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/7043.pem (1338 bytes)
	W0408 10:53:21.504246    9084 certs.go:480] ignoring /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/7043_empty.pem, impossibly tiny 0 bytes
	I0408 10:53:21.504250    9084 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 10:53:21.504268    9084 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem (1082 bytes)
	I0408 10:53:21.504289    9084 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem (1123 bytes)
	I0408 10:53:21.504306    9084 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/key.pem (1675 bytes)
	I0408 10:53:21.504342    9084 certs.go:484] found cert: /Users/jenkins/minikube-integration/18585-6624/.minikube/files/etc/ssl/certs/70432.pem (1708 bytes)
	I0408 10:53:21.504667    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 10:53:21.511520    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 10:53:21.518160    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 10:53:21.525469    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 10:53:21.534769    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 10:53:21.541760    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 10:53:21.549220    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 10:53:21.555842    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 10:53:21.562361    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/files/etc/ssl/certs/70432.pem --> /usr/share/ca-certificates/70432.pem (1708 bytes)
	I0408 10:53:21.569482    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 10:53:21.576292    9084 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/7043.pem --> /usr/share/ca-certificates/7043.pem (1338 bytes)
	I0408 10:53:21.582771    9084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 10:53:21.587705    9084 ssh_runner.go:195] Run: openssl version
	I0408 10:53:21.589418    9084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7043.pem && ln -fs /usr/share/ca-certificates/7043.pem /etc/ssl/certs/7043.pem"
	I0408 10:53:21.592648    9084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7043.pem
	I0408 10:53:21.594034    9084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 17:36 /usr/share/ca-certificates/7043.pem
	I0408 10:53:21.594051    9084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7043.pem
	I0408 10:53:21.595877    9084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7043.pem /etc/ssl/certs/51391683.0"
	I0408 10:53:21.598527    9084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70432.pem && ln -fs /usr/share/ca-certificates/70432.pem /etc/ssl/certs/70432.pem"
	I0408 10:53:21.601727    9084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70432.pem
	I0408 10:53:21.603131    9084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 17:36 /usr/share/ca-certificates/70432.pem
	I0408 10:53:21.603146    9084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70432.pem
	I0408 10:53:21.604802    9084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/70432.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 10:53:21.607588    9084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 10:53:21.610333    9084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 10:53:21.611675    9084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 17:49 /usr/share/ca-certificates/minikubeCA.pem
	I0408 10:53:21.611692    9084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 10:53:21.613272    9084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
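	The hash-and-symlink steps above follow the standard OpenSSL CA directory layout: certificates in /etc/ssl/certs are looked up by subject-hash filenames, so each installed PEM gets a "<hash>.0" symlink. A minimal sketch of the same pattern for one of the certs in this run:

	    # openssl x509 -hash prints the subject hash OpenSSL uses for lookup;
	    # the ".0" suffix is the first (and here only) cert with that hash.
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"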
	I0408 10:53:21.616229    9084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 10:53:21.617546    9084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 10:53:21.619458    9084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 10:53:21.621135    9084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 10:53:21.622941    9084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 10:53:21.624622    9084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 10:53:21.626305    9084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
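	The six openssl runs above all use -checkend 86400, which exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and non-zero otherwise, so an expiring cert fails the probe before it actually lapses. A sketch of the same check for a single cert:

	    # Non-zero exit means the cert expires within the next 24h.
	    if ! openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
	      echo "apiserver.crt expires within 24h" >&2
	    fi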
	I0408 10:53:21.628180    9084 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51476 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0408 10:53:21.628241    9084 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0408 10:53:21.638560    9084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 10:53:21.641730    9084 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 10:53:21.641737    9084 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 10:53:21.641740    9084 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 10:53:21.641765    9084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 10:53:21.645114    9084 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 10:53:21.645410    9084 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-476000" does not appear in /Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:53:21.645503    9084 kubeconfig.go:62] /Users/jenkins/minikube-integration/18585-6624/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-476000" cluster setting kubeconfig missing "stopped-upgrade-476000" context setting]
	I0408 10:53:21.645694    9084 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/kubeconfig: {Name:mk0efe9672745867be1d2d584884b1976098d9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:53:21.646112    9084 kapi.go:59] client config for stopped-upgrade-476000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/client.key", CAFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1042e3a70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 10:53:21.646414    9084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 10:53:21.649269    9084 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-476000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
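	The drift check above reduces to a diff between the last-applied kubeadm config and the freshly rendered one; only when they differ (here: the unix:// CRI socket prefix and the systemd-to-cgroupfs cgroupDriver change) is the new file copied into place and the init phases re-run. A sketch, assuming the same /var/tmp/minikube paths used in this run:

	    # diff exits non-zero when the rendered config changed.
	    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	      # then re-run the kubeadm init phases (certs, kubeconfig, kubelet-start,
	      # control-plane, etcd) against the updated config, as the log does below
	    fi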
	I0408 10:53:21.649275    9084 kubeadm.go:1154] stopping kube-system containers ...
	I0408 10:53:21.649311    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0408 10:53:21.659773    9084 docker.go:483] Stopping containers: [3fcb068b7c04 57d2272b22f0 45e06afd7b3e d3d7a66c7373 c3d8e8e2e6e0 b25ec593bc5b 94347bae0439 f8feaed80a64]
	I0408 10:53:21.659836    9084 ssh_runner.go:195] Run: docker stop 3fcb068b7c04 57d2272b22f0 45e06afd7b3e d3d7a66c7373 c3d8e8e2e6e0 b25ec593bc5b 94347bae0439 f8feaed80a64
	I0408 10:53:21.670456    9084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 10:53:21.676278    9084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 10:53:21.678875    9084 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 10:53:21.678880    9084 kubeadm.go:156] found existing configuration files:
	
	I0408 10:53:21.678901    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf
	I0408 10:53:21.681683    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 10:53:21.681704    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 10:53:21.684624    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf
	I0408 10:53:21.686888    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 10:53:21.686908    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 10:53:21.689896    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf
	I0408 10:53:21.692877    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 10:53:21.692901    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 10:53:21.695459    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf
	I0408 10:53:21.697963    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 10:53:21.697991    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 10:53:21.700934    9084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 10:53:21.703503    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 10:53:21.725160    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 10:53:22.133590    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 10:53:22.276650    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 10:53:22.298064    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 10:53:22.317886    9084 api_server.go:52] waiting for apiserver process to appear ...
	I0408 10:53:22.317967    9084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 10:53:22.820366    9084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 10:53:23.320084    9084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 10:53:23.331213    9084 api_server.go:72] duration metric: took 1.01332225s to wait for apiserver process to appear ...
	I0408 10:53:23.331228    9084 api_server.go:88] waiting for apiserver healthz status ...
	I0408 10:53:23.331236    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:28.333374    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:28.333396    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:33.333678    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:33.333725    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:38.334109    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:38.334160    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:43.334779    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:43.334826    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:48.335468    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:48.335501    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:53.336326    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:53.336347    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:53:58.337359    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:53:58.337432    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:03.338308    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:03.338404    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:08.340836    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:08.340859    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:13.342295    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:13.342343    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:18.344734    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:18.344766    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:23.346107    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
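	Each healthz probe in the loop above is a plain HTTPS GET against the apiserver with a roughly 5-second client timeout; the repeated "context deadline exceeded" means nothing is answering on 10.0.2.15:8443 yet. An equivalent manual probe (-k because the apiserver presents a cert signed by minikube's own CA):

	    curl -k --max-time 5 https://10.0.2.15:8443/healthz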
	I0408 10:54:23.346387    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:54:23.372684    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:54:23.372790    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:54:23.388871    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:54:23.388953    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:54:23.401701    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:54:23.401777    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:54:23.412860    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:54:23.412944    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:54:23.422670    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:54:23.422736    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:54:23.434098    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:54:23.434170    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:54:23.444453    9084 logs.go:276] 0 containers: []
	W0408 10:54:23.444470    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:54:23.444522    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:54:23.454687    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:54:23.454703    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:54:23.454708    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:54:23.471849    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:54:23.471860    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:54:23.483521    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:54:23.483534    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:54:23.523199    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:54:23.523210    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:54:23.535677    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:54:23.535691    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:54:23.551159    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:54:23.551171    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:54:23.562707    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:54:23.562719    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:54:23.588624    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:54:23.588635    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:54:23.629875    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:54:23.629886    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:54:23.649502    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:54:23.649519    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:54:23.664613    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:54:23.664627    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:54:23.675819    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:54:23.675830    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:54:23.687843    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:54:23.687853    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:54:23.692446    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:54:23.692455    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:54:23.804585    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:54:23.804599    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:54:23.818851    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:54:23.818862    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
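	The gathering pass above follows one pattern per component: list containers whose names carry the k8s_<component> prefix that cri-dockerd assigns, then tail the last 400 lines of each. A sketch for a single component:

	    # Both current and exited containers (-a), matched by name prefix.
	    for id in $(docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'); do
	      docker logs --tail 400 "$id"
	    done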
	I0408 10:54:26.333686    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:31.334069    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:31.334323    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:54:31.371242    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:54:31.371401    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:54:31.390676    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:54:31.390776    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:54:31.404629    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:54:31.404723    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:54:31.416966    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:54:31.417042    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:54:31.428429    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:54:31.428489    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:54:31.438836    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:54:31.438905    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:54:31.450262    9084 logs.go:276] 0 containers: []
	W0408 10:54:31.450273    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:54:31.450340    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:54:31.460764    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:54:31.460780    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:54:31.460796    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:54:31.464760    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:54:31.464770    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:54:31.500015    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:54:31.500029    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:54:31.515023    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:54:31.515033    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:54:31.539339    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:54:31.539348    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:54:31.550865    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:54:31.550876    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:54:31.591633    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:54:31.591649    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:54:31.604029    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:54:31.604040    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:54:31.621850    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:54:31.621861    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:54:31.635376    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:54:31.635390    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:54:31.649656    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:54:31.649666    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:54:31.663917    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:54:31.663931    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:54:31.677567    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:54:31.677580    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:54:31.719049    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:54:31.719064    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:54:31.731411    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:54:31.731422    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:54:31.748753    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:54:31.748766    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:54:34.262619    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:39.264456    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:39.264697    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:54:39.281980    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:54:39.282081    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:54:39.295774    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:54:39.295863    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:54:39.309500    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:54:39.309580    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:54:39.323692    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:54:39.323761    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:54:39.334363    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:54:39.334433    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:54:39.344540    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:54:39.344611    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:54:39.354668    9084 logs.go:276] 0 containers: []
	W0408 10:54:39.354679    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:54:39.354742    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:54:39.367225    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:54:39.367246    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:54:39.367251    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:54:39.380935    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:54:39.380945    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:54:39.392593    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:54:39.392603    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:54:39.410552    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:54:39.410563    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:54:39.434507    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:54:39.434518    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:54:39.448209    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:54:39.448218    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:54:39.459807    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:54:39.459818    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:54:39.475562    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:54:39.475580    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:54:39.487689    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:54:39.487709    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:54:39.499810    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:54:39.499819    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:54:39.538987    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:54:39.539004    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:54:39.543662    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:54:39.543686    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:54:39.581797    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:54:39.581815    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:54:39.593337    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:54:39.593353    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:54:39.632209    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:54:39.632221    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:54:39.646678    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:54:39.646692    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:54:42.162542    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:47.165185    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:47.165386    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:54:47.177576    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:54:47.177669    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:54:47.187935    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:54:47.188004    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:54:47.198159    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:54:47.198228    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:54:47.211431    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:54:47.211505    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:54:47.226247    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:54:47.226312    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:54:47.236942    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:54:47.237024    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:54:47.249342    9084 logs.go:276] 0 containers: []
	W0408 10:54:47.249356    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:54:47.249426    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:54:47.259606    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:54:47.259624    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:54:47.259629    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:54:47.275415    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:54:47.275426    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:54:47.317378    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:54:47.317388    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:54:47.331947    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:54:47.331957    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:54:47.345946    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:54:47.345957    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:54:47.359309    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:54:47.359322    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:54:47.371082    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:54:47.371092    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:54:47.409387    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:54:47.409396    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:54:47.413581    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:54:47.413587    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:54:47.428518    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:54:47.428528    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:54:47.440195    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:54:47.440205    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:54:47.464438    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:54:47.464450    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:54:47.479195    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:54:47.479205    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:54:47.493644    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:54:47.493654    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:54:47.505402    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:54:47.505412    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:54:47.548107    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:54:47.548119    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:54:50.067306    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:54:55.069968    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:54:55.070472    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:54:55.108688    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:54:55.108814    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:54:55.130103    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:54:55.130232    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:54:55.147189    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:54:55.147261    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:54:55.163458    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:54:55.163537    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:54:55.174175    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:54:55.174249    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:54:55.184859    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:54:55.184924    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:54:55.195346    9084 logs.go:276] 0 containers: []
	W0408 10:54:55.195361    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:54:55.195421    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:54:55.205551    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:54:55.205567    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:54:55.205572    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:54:55.217322    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:54:55.217333    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:54:55.241517    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:54:55.241524    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:54:55.256757    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:54:55.256769    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:54:55.275007    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:54:55.275018    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:54:55.286865    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:54:55.286879    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:54:55.298909    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:54:55.298919    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:54:55.311334    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:54:55.311345    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:54:55.325200    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:54:55.325209    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:54:55.361897    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:54:55.361907    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:54:55.366602    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:54:55.366610    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:54:55.403655    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:54:55.403667    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:54:55.417584    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:54:55.417594    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:54:55.436919    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:54:55.436929    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:54:55.474545    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:54:55.474555    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:54:55.489204    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:54:55.489215    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:54:58.003264    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:03.005911    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:03.006284    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:03.039309    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:03.039446    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:03.057761    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:03.057848    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:03.082004    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:03.082087    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:03.093327    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:03.093397    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:03.103605    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:03.103676    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:03.114428    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:03.114496    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:03.124664    9084 logs.go:276] 0 containers: []
	W0408 10:55:03.124675    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:03.124738    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:03.135279    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:03.135297    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:03.135302    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:03.153267    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:03.153279    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:03.177170    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:03.177178    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:03.188564    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:03.188574    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:03.202640    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:03.202650    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:03.214102    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:03.214114    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:03.228895    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:03.228906    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:55:03.240131    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:03.240143    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:03.254464    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:03.254474    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:03.266342    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:03.266351    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:03.303547    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:03.303562    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:03.342461    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:03.342478    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:03.354433    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:03.354445    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:03.359253    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:03.359261    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:03.373241    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:03.373251    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:03.410122    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:03.410133    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
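Each "Gathering logs for ..." step above then tails the last 400 lines from every container found, via "docker logs --tail 400 <id>"; the host-level sources (kubelet, Docker, dmesg, describe nodes) are collected by separate commands. A minimal local sketch of the per-container step, using two container IDs taken from the log above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // IDs taken from the enumeration in the log above (apiserver and etcd).
        ids := []string{"dc533809f89d", "de47585b049b"}
        for _, id := range ids {
            fmt.Println("Gathering logs for", id, "...")
            // CombinedOutput captures stderr too; docker logs writes a
            // container's stderr stream to stderr.
            out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                fmt.Printf("docker logs %s failed: %v\n", id, err)
                continue
            }
            fmt.Print(string(out))
        }
    }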
	I0408 10:55:05.925775    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:10.928208    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:10.928385    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:10.943391    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:10.943498    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:10.955006    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:10.955072    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:10.965199    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:10.965265    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:10.976203    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:10.976277    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:10.986779    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:10.986841    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:10.997121    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:10.997185    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:11.007337    9084 logs.go:276] 0 containers: []
	W0408 10:55:11.007349    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:11.007411    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:11.017718    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:11.017735    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:11.017741    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:11.029688    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:11.029699    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:11.043819    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:11.043829    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:11.055420    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:11.055430    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:11.080056    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:11.080063    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:11.099898    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:11.099908    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:11.114062    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:11.114075    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:11.128974    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:11.128987    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:11.146387    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:11.146397    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:11.187358    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:11.187368    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:55:11.200981    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:11.200994    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:55:11.211916    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:11.211928    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:11.223581    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:11.223591    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:11.235021    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:11.235031    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:11.272573    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:11.272582    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:11.276786    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:11.276796    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
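The "container status" command seen in each cycle is a shell fallback chain: the substitution `which crictl || echo crictl` expands to either the full crictl path or the bare name, and if that invocation fails because crictl is absent, "|| sudo docker ps -a" falls back to Docker. The command substitution and || operators are exactly why it runs under /bin/bash -c rather than as a plain exec. A sketch of invoking the same chain from Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Backticks and || need a shell, hence bash -c, as in the log above.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker ps failed:", err)
        }
        fmt.Print(string(out))
    }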
	I0408 10:55:13.815132    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:18.818054    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:18.818792    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:18.861661    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:18.861773    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:18.880570    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:18.880659    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:18.895585    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:18.895665    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:18.907916    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:18.907983    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:18.925063    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:18.925141    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:18.937342    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:18.937418    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:18.947561    9084 logs.go:276] 0 containers: []
	W0408 10:55:18.947570    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:18.947622    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:18.958314    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:18.958333    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:18.958339    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:18.969940    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:18.969954    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:18.981749    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:18.981763    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:18.985755    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:18.985765    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:18.997900    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:18.997912    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:19.012509    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:19.012519    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:19.023974    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:19.023985    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:19.047774    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:19.047781    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:19.084655    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:19.084668    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:19.105018    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:19.105027    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:55:19.118479    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:19.118494    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:19.142732    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:19.142742    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:19.161522    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:19.161534    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:19.200711    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:19.200721    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:19.214597    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:19.214610    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:19.252068    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:19.252082    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
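The host-level sources round out each cycle: journalctl tails the kubelet and Docker units, and dmesg is restricted to warn-and-above priorities, where on util-linux dmesg -H selects human-readable output, -P suppresses the pager that -H would otherwise use, and -L=never disables colouring; the pipe through "tail -n 400" caps the output and is again why the command runs under a shell. A sketch that runs the same three collectors locally:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // One command per host-level source, mirroring the log above; the dmesg
        // pipeline needs bash -c because of the pipe to tail.
        cmds := []string{
            "sudo journalctl -u kubelet -n 400",
            "sudo journalctl -u docker -u cri-docker -n 400",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for _, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            if err != nil {
                fmt.Printf("%q failed: %v\n", c, err)
                continue
            }
            fmt.Print(string(out))
        }
    }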
	I0408 10:55:21.765195    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:26.767696    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:26.768060    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:26.801740    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:26.801904    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:26.819760    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:26.819852    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:26.833448    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:26.833518    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:26.844656    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:26.844732    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:26.855382    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:26.855463    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:26.866448    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:26.866514    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:26.877049    9084 logs.go:276] 0 containers: []
	W0408 10:55:26.877061    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:26.877114    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:26.887968    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:26.888011    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:26.888018    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:26.925511    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:26.925522    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:55:26.936814    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:26.936824    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:26.948688    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:26.948700    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:26.966338    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:26.966348    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:26.978123    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:26.978133    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:55:26.992014    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:26.992024    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:27.010215    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:27.010225    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:27.024571    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:27.024581    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:27.028787    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:27.028797    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:27.087517    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:27.087528    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:27.102752    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:27.102762    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:27.119001    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:27.119011    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:27.142248    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:27.142254    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:27.181980    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:27.181992    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:27.197760    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:27.197772    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:29.713674    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:34.716031    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:34.716227    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:34.732832    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:34.732916    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:34.746309    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:34.746384    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:34.758430    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:34.758496    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:34.769429    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:34.769495    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:34.780070    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:34.780145    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:34.794632    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:34.794706    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:34.804795    9084 logs.go:276] 0 containers: []
	W0408 10:55:34.804811    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:34.804866    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:34.815200    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:34.815218    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:34.815223    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:34.830207    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:34.830221    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:55:34.841487    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:34.841499    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:34.864486    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:34.864497    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:34.876034    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:34.876047    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:34.890144    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:34.890155    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:55:34.903382    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:34.903393    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:34.920640    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:34.920650    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:34.955600    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:34.955611    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:34.971849    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:34.971859    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:34.983386    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:34.983396    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:35.020995    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:35.021004    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:35.057032    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:35.057043    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:35.068660    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:35.068672    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:35.083042    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:35.083053    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:35.087618    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:35.087625    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:37.601444    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:42.603754    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:42.603923    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:42.619633    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:42.619715    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:42.629690    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:42.629763    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:42.640627    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:42.640698    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:42.652038    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:42.652110    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:42.662226    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:42.662290    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:42.689548    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:42.689618    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:42.700158    9084 logs.go:276] 0 containers: []
	W0408 10:55:42.700175    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:42.700237    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:42.710719    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:42.710738    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:42.710743    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:42.747176    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:42.747191    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:55:42.761665    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:42.761678    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:42.776860    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:42.776873    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:42.788347    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:42.788358    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:42.811647    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:42.811654    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:42.823229    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:42.823241    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:42.859790    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:42.859801    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:42.895157    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:42.895170    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:42.912589    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:42.912599    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:42.927328    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:42.927343    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:42.941717    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:42.941728    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:42.953236    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:42.953247    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:42.967422    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:42.967432    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:42.979263    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:42.979274    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:42.983940    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:42.983949    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:55:45.495341    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:50.498281    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:50.498654    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:50.535494    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:50.535609    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:50.553341    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:50.553434    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:50.567090    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:50.567163    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:50.580385    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:50.580449    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:50.590845    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:50.590916    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:50.601256    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:50.601333    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:50.614147    9084 logs.go:276] 0 containers: []
	W0408 10:55:50.614160    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:50.614222    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:50.624748    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:50.624768    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:50.624775    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:50.636202    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:50.636216    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:50.650000    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:50.650010    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:50.684509    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:50.684520    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:55:50.699356    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:50.699366    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:50.713457    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:50.713473    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:50.727920    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:50.727930    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:50.740457    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:50.740468    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:50.763582    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:50.763593    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:55:50.775667    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:50.775678    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:50.814174    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:50.814184    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:50.828291    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:50.828301    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:50.846323    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:50.846334    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:50.858031    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:50.858044    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:50.895389    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:50.895403    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:55:50.906735    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:50.906746    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:53.411771    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:55:58.414277    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:55:58.414529    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:55:58.442497    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:55:58.442608    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:55:58.457918    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:55:58.457999    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:55:58.469883    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:55:58.469956    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:55:58.482772    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:55:58.482847    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:55:58.493729    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:55:58.493798    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:55:58.505951    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:55:58.506027    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:55:58.516139    9084 logs.go:276] 0 containers: []
	W0408 10:55:58.516153    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:55:58.516213    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:55:58.526466    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:55:58.526484    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:55:58.526490    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:55:58.530647    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:55:58.530655    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:55:58.545013    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:55:58.545026    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:55:58.557541    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:55:58.557554    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:55:58.580688    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:55:58.580697    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:55:58.592366    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:55:58.592376    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:55:58.609187    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:55:58.609198    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:55:58.645947    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:55:58.645969    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:55:58.687980    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:55:58.687992    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:55:58.702372    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:55:58.702384    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:55:58.739572    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:55:58.739584    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:55:58.754046    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:55:58.754057    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:55:58.768019    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:55:58.768029    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:55:58.779963    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:55:58.779976    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:55:58.794307    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:55:58.794317    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:55:58.805515    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:55:58.805525    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:01.320118    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:06.322713    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:06.322901    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:06.339065    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:56:06.339153    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:06.351795    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:56:06.351869    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:06.364096    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:56:06.364176    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:06.374934    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:56:06.375004    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:06.385001    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:56:06.385069    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:06.395668    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:56:06.395741    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:06.405519    9084 logs.go:276] 0 containers: []
	W0408 10:56:06.405535    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:06.405592    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:06.415806    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:56:06.415822    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:06.415827    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:06.454070    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:56:06.454078    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:56:06.465899    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:56:06.465915    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:56:06.480341    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:56:06.480357    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:56:06.491984    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:56:06.491998    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:06.505256    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:56:06.505268    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:56:06.519703    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:56:06.519715    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:56:06.533918    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:56:06.533928    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:56:06.549229    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:56:06.549240    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:56:06.567545    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:56:06.567561    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:56:06.582047    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:06.582056    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:06.604176    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:56:06.604182    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:56:06.618330    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:56:06.618341    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:56:06.630574    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:06.630583    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:06.635112    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:06.635125    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:06.676425    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:56:06.676443    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:56:09.218022    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:14.220269    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:14.220498    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:14.238946    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:56:14.239068    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:14.252455    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:56:14.252532    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:14.263573    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:56:14.263649    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:14.278471    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:56:14.278543    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:14.289071    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:56:14.289144    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:14.299734    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:56:14.299804    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:14.309757    9084 logs.go:276] 0 containers: []
	W0408 10:56:14.309771    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:14.309831    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:14.320148    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:56:14.320166    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:14.320172    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:14.343013    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:56:14.343021    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:56:14.353928    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:56:14.353943    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:56:14.367939    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:56:14.367949    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:56:14.381808    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:56:14.381818    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:56:14.396350    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:56:14.396360    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:56:14.413349    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:56:14.413359    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:14.425934    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:14.425945    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:14.462928    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:56:14.462943    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:56:14.479127    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:56:14.479145    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:56:14.494947    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:14.494961    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:14.502582    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:56:14.502597    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:56:14.542854    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:56:14.542866    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:56:14.558773    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:56:14.558784    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:56:14.572339    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:14.572351    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:14.612729    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:56:14.612746    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:56:17.130895    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:22.133585    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:22.133791    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:22.152251    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:56:22.152352    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:22.166066    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:56:22.166143    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:22.177445    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:56:22.177508    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:22.187559    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:56:22.187628    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:22.198052    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:56:22.198115    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:22.208998    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:56:22.209075    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:22.218972    9084 logs.go:276] 0 containers: []
	W0408 10:56:22.218982    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:22.219043    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:22.229361    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:56:22.229380    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:56:22.229386    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:22.241230    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:22.241242    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:22.277589    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:56:22.277598    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:56:22.298610    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:56:22.298624    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:56:22.309757    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:22.309769    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:22.334062    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:56:22.334079    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:56:22.354355    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:56:22.354367    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:56:22.395627    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:56:22.395643    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:56:22.408486    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:56:22.408499    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:56:22.424430    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:56:22.424443    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:56:22.439239    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:56:22.439250    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:56:22.453670    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:56:22.453680    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:56:22.465482    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:56:22.465496    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:56:22.485035    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:22.485048    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:22.490087    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:22.490100    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:22.530986    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:56:22.530999    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:56:25.045952    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:30.046541    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:30.046821    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:30.070583    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:56:30.070686    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:30.086902    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:56:30.086978    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:30.099337    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:56:30.099408    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:30.110773    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:56:30.110843    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:30.120736    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:56:30.120802    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:30.130949    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:56:30.131022    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:30.141143    9084 logs.go:276] 0 containers: []
	W0408 10:56:30.141161    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:30.141221    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:30.151623    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:56:30.151642    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:56:30.151648    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:56:30.170403    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:56:30.170417    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:56:30.181532    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:56:30.181545    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:56:30.194142    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:56:30.194153    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:56:30.216513    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:30.216525    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:30.221314    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:30.221326    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:30.260278    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:56:30.260295    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:56:30.299665    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:56:30.299682    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:56:30.312421    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:56:30.312433    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:56:30.327241    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:56:30.327255    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:56:30.346556    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:56:30.346570    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:56:30.363809    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:56:30.363819    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:30.376188    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:30.376197    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:30.420888    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:56:30.420911    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:56:30.435675    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:56:30.435689    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:56:30.453942    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:30.453959    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:32.980984    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:37.983303    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:37.983439    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:37.998640    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:56:37.998723    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:38.011489    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:56:38.011565    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:38.022287    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:56:38.022357    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:38.032421    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:56:38.032488    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:38.043086    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:56:38.043163    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:38.053552    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:56:38.053628    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:38.064279    9084 logs.go:276] 0 containers: []
	W0408 10:56:38.064292    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:38.064356    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:38.076352    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:56:38.076372    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:38.076382    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:38.118602    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:56:38.118618    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:56:38.133993    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:56:38.134005    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:56:38.148948    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:56:38.148956    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:38.161418    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:38.161430    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:38.201228    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:56:38.201238    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:56:38.217080    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:56:38.217091    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:56:38.232267    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:56:38.232278    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:56:38.271060    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:56:38.271073    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:56:38.289085    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:38.289102    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:38.314299    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:56:38.314308    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:56:38.329354    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:56:38.329367    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:56:38.341640    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:56:38.341653    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:56:38.360150    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:56:38.360162    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:56:38.375378    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:56:38.375394    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:56:38.388176    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:38.388187    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:40.894052    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:45.896685    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:45.897004    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:45.933332    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:56:45.933463    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:45.951676    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:56:45.951768    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:45.966192    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:56:45.966245    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:45.980341    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:56:45.980407    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:46.011129    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:56:46.011229    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:46.030985    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:56:46.031024    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:46.045829    9084 logs.go:276] 0 containers: []
	W0408 10:56:46.045857    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:46.045923    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:46.057984    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:56:46.058004    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:56:46.058009    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:56:46.101244    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:56:46.101260    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:56:46.116134    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:56:46.116149    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:46.129440    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:46.129455    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:46.166849    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:56:46.166859    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:56:46.181675    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:56:46.181689    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:56:46.200402    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:56:46.200416    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:56:46.220122    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:46.220133    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:46.244843    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:46.244869    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:46.283943    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:46.283961    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:46.288488    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:56:46.288497    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:56:46.303859    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:56:46.303869    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:56:46.323292    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:56:46.323303    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:56:46.336944    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:56:46.336957    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:56:46.350906    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:56:46.350916    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:56:46.364590    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:56:46.364603    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:56:48.878136    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:56:53.880470    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:56:53.880692    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:56:53.895993    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:56:53.896070    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:56:53.909353    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:56:53.909424    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:56:53.921651    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:56:53.921725    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:56:53.933346    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:56:53.933415    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:56:53.944782    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:56:53.944860    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:56:53.962984    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:56:53.963057    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:56:53.973921    9084 logs.go:276] 0 containers: []
	W0408 10:56:53.973932    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:56:53.973989    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:56:53.987288    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:56:53.987308    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:56:53.987314    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:56:54.003279    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:56:54.003287    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:56:54.015641    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:56:54.015653    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:56:54.034525    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:56:54.034535    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:56:54.074567    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:56:54.074578    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:56:54.090090    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:56:54.090109    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:56:54.102930    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:56:54.102939    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:56:54.127186    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:56:54.127200    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:56:54.139523    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:56:54.139538    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:56:54.152265    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:56:54.152278    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:56:54.156990    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:56:54.156999    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:56:54.196206    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:56:54.196219    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:56:54.211921    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:56:54.211932    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:56:54.233614    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:56:54.233625    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:56:54.245191    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:56:54.245203    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:56:54.284704    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:56:54.284714    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:56:56.801045    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:01.803181    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:01.803261    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:01.815142    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:57:01.815222    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:01.826493    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:57:01.826564    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:01.838224    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:57:01.838300    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:01.849650    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:57:01.849728    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:01.861640    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:57:01.861703    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:01.872488    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:57:01.872565    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:01.883623    9084 logs.go:276] 0 containers: []
	W0408 10:57:01.883635    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:01.883709    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:01.894388    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:57:01.894407    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:57:01.894415    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:57:01.912846    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:01.912857    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:01.937524    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:01.937536    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:01.941989    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:01.941996    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:01.978238    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:57:01.978250    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:57:02.018336    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:02.018353    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:02.057495    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:57:02.057506    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:57:02.072573    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:57:02.072582    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:57:02.085119    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:57:02.085132    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:57:02.100237    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:57:02.100251    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:02.112893    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:57:02.112907    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:57:02.129885    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:57:02.129895    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:57:02.144908    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:57:02.144919    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:57:02.166903    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:57:02.166914    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:57:02.182499    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:57:02.182513    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:57:02.194013    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:57:02.194024    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:57:04.706982    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:09.709127    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:09.709200    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:09.721366    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:57:09.721441    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:09.732827    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:57:09.732888    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:09.743736    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:57:09.743800    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:09.755336    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:57:09.755412    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:09.771318    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:57:09.771390    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:09.782823    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:57:09.782899    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:09.793637    9084 logs.go:276] 0 containers: []
	W0408 10:57:09.793651    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:09.793714    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:09.805234    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:57:09.805249    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:57:09.805254    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:57:09.818574    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:09.818596    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:09.857319    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:57:09.857336    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:57:09.896107    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:57:09.896118    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:57:09.912136    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:57:09.912144    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:57:09.927563    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:57:09.927575    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:57:09.940076    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:57:09.940089    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:09.952570    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:09.952583    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:09.999392    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:57:09.999402    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:57:10.014816    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:57:10.014834    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:57:10.029961    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:57:10.029973    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:57:10.042560    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:57:10.042573    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:57:10.064282    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:57:10.064296    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:57:10.075829    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:10.075843    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:10.097502    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:10.097511    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:10.101669    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:57:10.101675    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:57:12.620802    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:17.622339    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:17.622383    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:57:17.635147    9084 logs.go:276] 2 containers: [dc533809f89d 57d2272b22f0]
	I0408 10:57:17.635196    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:57:17.646354    9084 logs.go:276] 2 containers: [de47585b049b d3d7a66c7373]
	I0408 10:57:17.646420    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:57:17.659347    9084 logs.go:276] 1 containers: [b6635db9ea28]
	I0408 10:57:17.659421    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:57:17.670493    9084 logs.go:276] 2 containers: [c18ce61f6afc 45e06afd7b3e]
	I0408 10:57:17.670569    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:57:17.681578    9084 logs.go:276] 1 containers: [d4723aa54531]
	I0408 10:57:17.681649    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:57:17.698580    9084 logs.go:276] 2 containers: [d7954562a916 3fcb068b7c04]
	I0408 10:57:17.698655    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:57:17.709351    9084 logs.go:276] 0 containers: []
	W0408 10:57:17.709362    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:57:17.709420    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:57:17.720736    9084 logs.go:276] 1 containers: [18d66ae57f30]
	I0408 10:57:17.720754    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:57:17.720760    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:57:17.725365    9084 logs.go:123] Gathering logs for kube-scheduler [45e06afd7b3e] ...
	I0408 10:57:17.725375    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e06afd7b3e"
	I0408 10:57:17.740762    9084 logs.go:123] Gathering logs for kube-proxy [d4723aa54531] ...
	I0408 10:57:17.740773    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4723aa54531"
	I0408 10:57:17.753424    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:57:17.753435    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:57:17.776656    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:57:17.776665    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:57:17.813764    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:57:17.813777    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:57:17.828699    9084 logs.go:123] Gathering logs for kube-apiserver [dc533809f89d] ...
	I0408 10:57:17.828709    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc533809f89d"
	I0408 10:57:17.865258    9084 logs.go:123] Gathering logs for kube-apiserver [57d2272b22f0] ...
	I0408 10:57:17.865276    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57d2272b22f0"
	I0408 10:57:17.912061    9084 logs.go:123] Gathering logs for storage-provisioner [18d66ae57f30] ...
	I0408 10:57:17.912075    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18d66ae57f30"
	I0408 10:57:17.924472    9084 logs.go:123] Gathering logs for kube-controller-manager [3fcb068b7c04] ...
	I0408 10:57:17.924484    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fcb068b7c04"
	I0408 10:57:17.938016    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:57:17.938026    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:57:17.974861    9084 logs.go:123] Gathering logs for etcd [de47585b049b] ...
	I0408 10:57:17.974869    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de47585b049b"
	I0408 10:57:17.988309    9084 logs.go:123] Gathering logs for etcd [d3d7a66c7373] ...
	I0408 10:57:17.988319    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3d7a66c7373"
	I0408 10:57:18.002652    9084 logs.go:123] Gathering logs for coredns [b6635db9ea28] ...
	I0408 10:57:18.002664    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6635db9ea28"
	I0408 10:57:18.014171    9084 logs.go:123] Gathering logs for kube-scheduler [c18ce61f6afc] ...
	I0408 10:57:18.014185    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c18ce61f6afc"
	I0408 10:57:18.025845    9084 logs.go:123] Gathering logs for kube-controller-manager [d7954562a916] ...
	I0408 10:57:18.025855    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7954562a916"
	I0408 10:57:20.545926    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:25.548333    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:25.548363    9084 kubeadm.go:591] duration metric: took 4m3.905114541s to restartPrimaryControlPlane
	W0408 10:57:25.548404    9084 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 10:57:25.548417    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0408 10:57:26.580495    9084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.032062125s)
	I0408 10:57:26.580575    9084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 10:57:26.585606    9084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 10:57:26.588389    9084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 10:57:26.591016    9084 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 10:57:26.591022    9084 kubeadm.go:156] found existing configuration files:
	
	I0408 10:57:26.591047    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf
	I0408 10:57:26.593345    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 10:57:26.593368    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 10:57:26.596312    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf
	I0408 10:57:26.599171    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 10:57:26.599191    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 10:57:26.601645    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf
	I0408 10:57:26.604662    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 10:57:26.604684    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 10:57:26.607817    9084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf
	I0408 10:57:26.610495    9084 kubeadm.go:162] "https://control-plane.minikube.internal:51476" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51476 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 10:57:26.610517    9084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
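The check-then-remove sequence above applies one rule per kubeconfig file: keep it only if it already references the expected control-plane endpoint, otherwise delete it. A compact sketch of that logic, using the same endpoint shown in the log:

	# Remove any kubeconfig that does not point at the expected endpoint.
	ep="https://control-plane.minikube.internal:51476"
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$ep" /etc/kubernetes/${f}.conf || sudo rm -f /etc/kubernetes/${f}.conf
	done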
	I0408 10:57:26.613212    9084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 10:57:26.629729    9084 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0408 10:57:26.629765    9084 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 10:57:26.679877    9084 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 10:57:26.679933    9084 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 10:57:26.679992    9084 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
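As the preflight hint suggests, the image pull can be done ahead of time. A sketch using the same kubeadm binary and config file that the init command above references:

	# Optional: pre-pull control-plane images before 'kubeadm init'.
	sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	  kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml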
	I0408 10:57:26.728518    9084 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 10:57:26.732731    9084 out.go:204]   - Generating certificates and keys ...
	I0408 10:57:26.732864    9084 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 10:57:26.732965    9084 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 10:57:26.733055    9084 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 10:57:26.733090    9084 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 10:57:26.733126    9084 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 10:57:26.733151    9084 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 10:57:26.733184    9084 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 10:57:26.733285    9084 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 10:57:26.733383    9084 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 10:57:26.733468    9084 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 10:57:26.733489    9084 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 10:57:26.733519    9084 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 10:57:26.805521    9084 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 10:57:26.948630    9084 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 10:57:27.196024    9084 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 10:57:27.263798    9084 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 10:57:27.292234    9084 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 10:57:27.292678    9084 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 10:57:27.292699    9084 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 10:57:27.386598    9084 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 10:57:27.390942    9084 out.go:204]   - Booting up control plane ...
	I0408 10:57:27.390988    9084 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 10:57:27.391024    9084 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 10:57:27.391057    9084 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 10:57:27.391119    9084 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 10:57:27.391208    9084 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 10:57:31.891771    9084 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501223 seconds
	I0408 10:57:31.891834    9084 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 10:57:31.895564    9084 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 10:57:32.412543    9084 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 10:57:32.412834    9084 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-476000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 10:57:32.916321    9084 kubeadm.go:309] [bootstrap-token] Using token: 9bum99.wbtrb7jvnhsflftl
	I0408 10:57:32.922648    9084 out.go:204]   - Configuring RBAC rules ...
	I0408 10:57:32.922701    9084 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 10:57:32.922761    9084 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 10:57:32.926348    9084 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 10:57:32.927084    9084 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 10:57:32.928025    9084 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 10:57:32.928940    9084 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 10:57:32.932376    9084 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 10:57:33.108672    9084 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 10:57:33.321122    9084 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 10:57:33.321547    9084 kubeadm.go:309] 
	I0408 10:57:33.321639    9084 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 10:57:33.321653    9084 kubeadm.go:309] 
	I0408 10:57:33.321764    9084 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 10:57:33.321774    9084 kubeadm.go:309] 
	I0408 10:57:33.321827    9084 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 10:57:33.321867    9084 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 10:57:33.321895    9084 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 10:57:33.321901    9084 kubeadm.go:309] 
	I0408 10:57:33.321934    9084 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 10:57:33.321937    9084 kubeadm.go:309] 
	I0408 10:57:33.321968    9084 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 10:57:33.321970    9084 kubeadm.go:309] 
	I0408 10:57:33.322067    9084 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 10:57:33.322169    9084 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 10:57:33.322326    9084 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 10:57:33.322333    9084 kubeadm.go:309] 
	I0408 10:57:33.322381    9084 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 10:57:33.322424    9084 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 10:57:33.322428    9084 kubeadm.go:309] 
	I0408 10:57:33.322466    9084 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9bum99.wbtrb7jvnhsflftl \
	I0408 10:57:33.322546    9084 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d8c71b19ee4cc7e59001527410e809db8edc3975b4a5e7aa364a2679c02ff296 \
	I0408 10:57:33.322609    9084 kubeadm.go:309] 	--control-plane 
	I0408 10:57:33.322652    9084 kubeadm.go:309] 
	I0408 10:57:33.322700    9084 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 10:57:33.322703    9084 kubeadm.go:309] 
	I0408 10:57:33.322775    9084 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9bum99.wbtrb7jvnhsflftl \
	I0408 10:57:33.322854    9084 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d8c71b19ee4cc7e59001527410e809db8edc3975b4a5e7aa364a2679c02ff296 
	I0408 10:57:33.322950    9084 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
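The join commands printed above embed a bootstrap token with a limited lifetime (24 hours by default). If it expires, a fresh worker join command can be generated on the control-plane node with standard kubeadm; nothing minikube-specific is involved:

	# Regenerate a worker join command with a new bootstrap token.
	kubeadm token create --print-join-command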
	I0408 10:57:33.322963    9084 cni.go:84] Creating CNI manager for ""
	I0408 10:57:33.322978    9084 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:57:33.326734    9084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 10:57:33.334690    9084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 10:57:33.337900    9084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
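The 496-byte conflist copied above configures the bridge CNI plugin. For illustration only, a minimal bridge configuration of that general shape; the field values here are assumptions, not minikube's exact 1-k8s.conflist:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "k8s",
	  "plugins": [
	    {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}
	EOF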
	I0408 10:57:33.343261    9084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 10:57:33.343347    9084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 10:57:33.343365    9084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-476000 minikube.k8s.io/updated_at=2024_04_08T10_57_33_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f9de8f0b190a4305b11b3a925ec3e499cf3fc021 minikube.k8s.io/name=stopped-upgrade-476000 minikube.k8s.io/primary=true
	I0408 10:57:33.347015    9084 ops.go:34] apiserver oom_adj: -16
	I0408 10:57:33.398173    9084 kubeadm.go:1107] duration metric: took 54.904541ms to wait for elevateKubeSystemPrivileges
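The clusterrolebinding created a few lines up grants cluster-admin to kube-system's default service account. It can be checked afterwards with the same kubectl binary and kubeconfig the log uses:

	# Verify the minikube-rbac binding exists.
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl get clusterrolebinding minikube-rbac \
	  --kubeconfig=/var/lib/minikube/kubeconfig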
	W0408 10:57:33.398203    9084 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 10:57:33.398207    9084 kubeadm.go:393] duration metric: took 4m11.768473416s to StartCluster
	I0408 10:57:33.398217    9084 settings.go:142] acquiring lock: {Name:mk6ed0f877152c89dfeb4cfbed60423b324ecbe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:57:33.398307    9084 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:57:33.398730    9084 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/kubeconfig: {Name:mk0efe9672745867be1d2d584884b1976098d9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:57:33.398988    9084 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:57:33.405678    9084 out.go:177] * Verifying Kubernetes components...
	I0408 10:57:33.399006    9084 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 10:57:33.399281    9084 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 10:57:33.413777    9084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 10:57:33.413790    9084 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-476000"
	I0408 10:57:33.413792    9084 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-476000"
	I0408 10:57:33.413832    9084 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-476000"
	W0408 10:57:33.413838    9084 addons.go:243] addon storage-provisioner should already be in state true
	I0408 10:57:33.413858    9084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-476000"
	I0408 10:57:33.413869    9084 host.go:66] Checking if "stopped-upgrade-476000" exists ...
	I0408 10:57:33.414337    9084 retry.go:31] will retry after 1.139752945s: connect: dial unix /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/monitor: connect: connection refused
	I0408 10:57:33.415557    9084 kapi.go:59] client config for stopped-upgrade-476000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/stopped-upgrade-476000/client.key", CAFile:"/Users/jenkins/minikube-integration/18585-6624/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1042e3a70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 10:57:33.415684    9084 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-476000"
	W0408 10:57:33.415690    9084 addons.go:243] addon default-storageclass should already be in state true
	I0408 10:57:33.415701    9084 host.go:66] Checking if "stopped-upgrade-476000" exists ...
	I0408 10:57:33.416661    9084 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 10:57:33.416667    9084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 10:57:33.416672    9084 sshutil.go:53] new ssh client: &{IP:localhost Port:51442 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/id_rsa Username:docker}
	I0408 10:57:33.505911    9084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 10:57:33.512168    9084 api_server.go:52] waiting for apiserver process to appear ...
	I0408 10:57:33.512222    9084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 10:57:33.516643    9084 api_server.go:72] duration metric: took 117.63975ms to wait for apiserver process to appear ...
	I0408 10:57:33.516653    9084 api_server.go:88] waiting for apiserver healthz status ...
	I0408 10:57:33.516662    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:33.554808    9084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 10:57:34.560968    9084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 10:57:34.564240    9084 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 10:57:34.564251    9084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 10:57:34.564263    9084 sshutil.go:53] new ssh client: &{IP:localhost Port:51442 SSHKeyPath:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/stopped-upgrade-476000/id_rsa Username:docker}
	I0408 10:57:34.599495    9084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 10:57:38.518836    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:38.518865    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:43.519371    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:43.519403    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:48.519837    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:48.519885    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:53.520417    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:53.520441    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:57:58.521086    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:57:58.521112    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:58:03.521905    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:58:03.521932    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0408 10:58:03.895863    9084 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0408 10:58:03.904271    9084 out.go:177] * Enabled addons: storage-provisioner
	I0408 10:58:03.913210    9084 addons.go:505] duration metric: took 30.514006417s for enable addons: enabled=[storage-provisioner]
	I0408 10:58:08.523025    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:58:08.523120    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:58:13.523793    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:58:13.523813    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:58:18.525451    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:58:18.525511    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:58:23.527591    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:58:23.527643    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:58:28.530052    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:58:28.530113    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:58:33.532590    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:58:33.532965    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:58:33.579321    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 10:58:33.579433    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:58:33.608232    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 10:58:33.608310    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:58:33.619123    9084 logs.go:276] 2 containers: [87fb975a17ef a9842b72d70e]
	I0408 10:58:33.619194    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:58:33.629790    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 10:58:33.629870    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:58:33.639828    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 10:58:33.639894    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:58:33.650091    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 10:58:33.650159    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:58:33.660352    9084 logs.go:276] 0 containers: []
	W0408 10:58:33.660362    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:58:33.660419    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:58:33.670456    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 10:58:33.670469    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:58:33.670474    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:58:33.710543    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 10:58:33.710557    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 10:58:33.725741    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 10:58:33.725754    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 10:58:33.739470    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 10:58:33.739479    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 10:58:33.750799    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:58:33.750814    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:58:33.774279    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:58:33.774285    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:58:33.807583    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:58:33.807590    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:58:33.811676    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 10:58:33.811682    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 10:58:33.823401    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 10:58:33.823415    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 10:58:33.841436    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 10:58:33.841447    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 10:58:33.858275    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:58:33.858286    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:58:33.871755    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 10:58:33.871767    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 10:58:33.883274    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 10:58:33.883286    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 10:58:36.400347    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:58:41.401622    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:58:41.401702    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:58:41.413874    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 10:58:41.413923    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:58:41.424184    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 10:58:41.424247    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:58:41.435066    9084 logs.go:276] 2 containers: [87fb975a17ef a9842b72d70e]
	I0408 10:58:41.435127    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:58:41.445353    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 10:58:41.445420    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:58:41.456225    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 10:58:41.456284    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:58:41.468810    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 10:58:41.468963    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:58:41.481348    9084 logs.go:276] 0 containers: []
	W0408 10:58:41.481358    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:58:41.481393    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:58:41.492387    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 10:58:41.492404    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 10:58:41.492410    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 10:58:41.508179    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 10:58:41.508188    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 10:58:41.526127    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:58:41.526135    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:58:41.561355    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:58:41.561369    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:58:41.567319    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:58:41.567331    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:58:41.605596    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 10:58:41.605606    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 10:58:41.620033    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 10:58:41.620041    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 10:58:41.631464    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:58:41.631475    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:58:41.655475    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:58:41.655488    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:58:41.667066    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 10:58:41.667076    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 10:58:41.679507    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 10:58:41.679516    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 10:58:41.693561    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 10:58:41.693572    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 10:58:41.709995    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 10:58:41.710011    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 10:58:44.228222    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:58:49.230587    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:58:49.230672    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:58:49.241665    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 10:58:49.241744    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:58:49.256252    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 10:58:49.256321    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:58:49.267052    9084 logs.go:276] 2 containers: [87fb975a17ef a9842b72d70e]
	I0408 10:58:49.267126    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:58:49.277620    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 10:58:49.277681    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:58:49.288157    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 10:58:49.288221    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:58:49.298848    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 10:58:49.298903    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:58:49.309911    9084 logs.go:276] 0 containers: []
	W0408 10:58:49.309926    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:58:49.309982    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:58:49.321262    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 10:58:49.321279    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 10:58:49.321285    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 10:58:49.337159    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 10:58:49.337169    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 10:58:49.352872    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 10:58:49.352881    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 10:58:49.370058    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:58:49.370069    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:58:49.409138    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 10:58:49.409149    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 10:58:49.423055    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 10:58:49.423064    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 10:58:49.436990    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 10:58:49.437002    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 10:58:49.452318    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 10:58:49.452329    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 10:58:49.464120    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:58:49.464134    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:58:49.488251    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:58:49.488258    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:58:49.499254    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:58:49.499263    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:58:49.534841    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:58:49.534850    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:58:49.539443    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 10:58:49.539451    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 10:58:52.054337    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:58:57.056279    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:58:57.056471    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:58:57.078746    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 10:58:57.078851    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:58:57.095054    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 10:58:57.095132    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:58:57.107775    9084 logs.go:276] 2 containers: [87fb975a17ef a9842b72d70e]
	I0408 10:58:57.107840    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:58:57.118646    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 10:58:57.118715    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:58:57.128968    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 10:58:57.129034    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:58:57.142015    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 10:58:57.142084    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:58:57.153497    9084 logs.go:276] 0 containers: []
	W0408 10:58:57.153506    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:58:57.153557    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:58:57.163428    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 10:58:57.163446    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:58:57.163451    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:58:57.168003    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:58:57.168013    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:58:57.205269    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 10:58:57.205277    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 10:58:57.223544    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 10:58:57.223557    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 10:58:57.238952    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 10:58:57.238963    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 10:58:57.256497    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 10:58:57.256509    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 10:58:57.267883    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:58:57.267894    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:58:57.292392    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:58:57.292399    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:58:57.327080    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 10:58:57.327089    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 10:58:57.339139    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 10:58:57.339152    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 10:58:57.350679    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 10:58:57.350692    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 10:58:57.362355    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:58:57.362364    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:58:57.374678    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 10:58:57.374689    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 10:58:59.888844    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:59:04.891692    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:59:04.892156    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:59:04.933298    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 10:59:04.933436    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:59:04.953546    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 10:59:04.953642    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:59:04.967706    9084 logs.go:276] 2 containers: [87fb975a17ef a9842b72d70e]
	I0408 10:59:04.967774    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:59:04.979932    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 10:59:04.979993    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:59:04.990529    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 10:59:04.990591    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:59:05.001368    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 10:59:05.001437    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:59:05.011156    9084 logs.go:276] 0 containers: []
	W0408 10:59:05.011169    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:59:05.011222    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:59:05.021805    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 10:59:05.021820    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:59:05.021826    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:59:05.055962    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:59:05.055969    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:59:05.060160    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 10:59:05.060168    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 10:59:05.071909    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 10:59:05.071921    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 10:59:05.083724    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:59:05.083737    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:59:05.108114    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:59:05.108123    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:59:05.119732    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 10:59:05.119741    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 10:59:05.137878    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:59:05.137889    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:59:05.172874    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 10:59:05.172886    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 10:59:05.188161    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 10:59:05.188174    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 10:59:05.203170    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 10:59:05.203182    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 10:59:05.220013    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 10:59:05.220027    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 10:59:05.234757    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 10:59:05.234768    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 10:59:07.748737    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:59:12.751365    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:59:12.751877    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:59:12.792854    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 10:59:12.792990    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:59:12.814608    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 10:59:12.814728    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:59:12.829994    9084 logs.go:276] 2 containers: [87fb975a17ef a9842b72d70e]
	I0408 10:59:12.830072    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:59:12.842205    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 10:59:12.842281    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:59:12.852772    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 10:59:12.852841    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:59:12.863312    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 10:59:12.863370    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:59:12.873634    9084 logs.go:276] 0 containers: []
	W0408 10:59:12.873644    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:59:12.873704    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:59:12.884172    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 10:59:12.884189    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 10:59:12.884194    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 10:59:12.899219    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 10:59:12.899228    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 10:59:12.916565    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:59:12.916576    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:59:12.929115    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:59:12.929127    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:59:12.963216    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:59:12.963223    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:59:12.967751    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:59:12.967761    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:59:13.002667    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 10:59:13.002677    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 10:59:13.017515    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 10:59:13.017527    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 10:59:13.034841    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 10:59:13.034853    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 10:59:13.048747    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 10:59:13.048757    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 10:59:13.060905    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 10:59:13.060916    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 10:59:13.072107    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 10:59:13.072119    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 10:59:13.086792    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:59:13.086803    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:59:15.613104    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:59:20.615851    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:59:20.616244    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:59:20.652920    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 10:59:20.653065    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:59:20.674474    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 10:59:20.674582    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:59:20.690322    9084 logs.go:276] 2 containers: [87fb975a17ef a9842b72d70e]
	I0408 10:59:20.690395    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:59:20.709607    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 10:59:20.709676    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:59:20.720098    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 10:59:20.720170    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:59:20.730760    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 10:59:20.730831    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:59:20.741029    9084 logs.go:276] 0 containers: []
	W0408 10:59:20.741040    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:59:20.741102    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:59:20.751652    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 10:59:20.751667    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 10:59:20.751671    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 10:59:20.765648    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 10:59:20.765662    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 10:59:20.777421    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 10:59:20.777436    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 10:59:20.796840    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 10:59:20.796852    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 10:59:20.816560    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 10:59:20.816574    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 10:59:20.828196    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:59:20.828209    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:59:20.861801    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:59:20.861808    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:59:20.865688    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:59:20.865696    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:59:20.906811    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:59:20.906825    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:59:20.930507    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:59:20.930516    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:59:20.942338    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 10:59:20.942349    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 10:59:20.957840    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 10:59:20.957848    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 10:59:20.970649    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 10:59:20.970658    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 10:59:23.488793    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:59:28.491198    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:59:28.491568    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:59:28.526857    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 10:59:28.526982    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:59:28.550370    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 10:59:28.550467    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:59:28.564993    9084 logs.go:276] 2 containers: [87fb975a17ef a9842b72d70e]
	I0408 10:59:28.565077    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:59:28.576984    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 10:59:28.577052    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:59:28.587810    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 10:59:28.587883    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:59:28.598119    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 10:59:28.598187    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:59:28.608837    9084 logs.go:276] 0 containers: []
	W0408 10:59:28.608850    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:59:28.608910    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:59:28.619792    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 10:59:28.619807    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:59:28.619812    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:59:28.653493    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:59:28.653503    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:59:28.687439    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 10:59:28.687452    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 10:59:28.702523    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 10:59:28.702531    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 10:59:28.716236    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 10:59:28.716252    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 10:59:28.728129    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 10:59:28.728141    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 10:59:28.743230    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:59:28.743239    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:59:28.767418    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:59:28.767425    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:59:28.771907    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 10:59:28.771915    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 10:59:28.783518    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 10:59:28.783530    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 10:59:28.795035    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 10:59:28.795048    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 10:59:28.813886    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 10:59:28.813896    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 10:59:28.825637    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:59:28.825649    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:59:31.339082    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:59:36.341442    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:59:36.341663    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:59:36.366549    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 10:59:36.366631    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:59:36.380692    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 10:59:36.380763    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:59:36.392291    9084 logs.go:276] 2 containers: [87fb975a17ef a9842b72d70e]
	I0408 10:59:36.392353    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:59:36.402678    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 10:59:36.402739    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:59:36.417084    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 10:59:36.417162    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:59:36.427822    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 10:59:36.427882    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:59:36.437251    9084 logs.go:276] 0 containers: []
	W0408 10:59:36.437262    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:59:36.437323    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:59:36.447619    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 10:59:36.447636    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 10:59:36.447642    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 10:59:36.459125    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 10:59:36.459139    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 10:59:36.476963    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 10:59:36.476981    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 10:59:36.489257    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 10:59:36.489271    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 10:59:36.506938    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:59:36.506951    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:59:36.530666    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:59:36.530673    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:59:36.542148    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 10:59:36.542158    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 10:59:36.559485    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 10:59:36.559495    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 10:59:36.571717    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:59:36.571730    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:59:36.606423    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 10:59:36.606437    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 10:59:36.619981    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 10:59:36.619993    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 10:59:36.631137    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:59:36.631150    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:59:36.666361    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:59:36.666368    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:59:39.171995    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:59:44.174799    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:59:44.175257    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:59:44.217067    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 10:59:44.217228    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:59:44.240030    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 10:59:44.240138    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:59:44.255832    9084 logs.go:276] 2 containers: [87fb975a17ef a9842b72d70e]
	I0408 10:59:44.255909    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:59:44.267762    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 10:59:44.267826    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:59:44.278974    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 10:59:44.279047    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:59:44.290477    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 10:59:44.290537    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:59:44.301407    9084 logs.go:276] 0 containers: []
	W0408 10:59:44.301419    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:59:44.301480    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:59:44.311502    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 10:59:44.311518    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:59:44.311524    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:59:44.345084    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 10:59:44.345096    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 10:59:44.359133    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 10:59:44.359142    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 10:59:44.373041    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 10:59:44.373052    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 10:59:44.384344    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 10:59:44.384357    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 10:59:44.395970    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 10:59:44.395982    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 10:59:44.407220    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 10:59:44.407232    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 10:59:44.426583    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:59:44.426598    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:59:44.467104    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:59:44.467122    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:59:44.503409    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:59:44.503425    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:59:44.532132    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 10:59:44.532154    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 10:59:44.566722    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 10:59:44.566733    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 10:59:44.589175    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:59:44.589188    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:59:47.095948    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 10:59:52.098105    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 10:59:52.098596    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 10:59:52.138386    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 10:59:52.138516    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 10:59:52.159810    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 10:59:52.159893    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 10:59:52.180227    9084 logs.go:276] 4 containers: [dca8cb12216d 75ee8da793c2 87fb975a17ef a9842b72d70e]
	I0408 10:59:52.180301    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 10:59:52.199274    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 10:59:52.199346    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 10:59:52.210011    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 10:59:52.210100    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 10:59:52.223880    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 10:59:52.223941    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 10:59:52.235303    9084 logs.go:276] 0 containers: []
	W0408 10:59:52.235316    9084 logs.go:278] No container was found matching "kindnet"
	I0408 10:59:52.235387    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 10:59:52.246398    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 10:59:52.246417    9084 logs.go:123] Gathering logs for coredns [dca8cb12216d] ...
	I0408 10:59:52.246422    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca8cb12216d"
	I0408 10:59:52.260478    9084 logs.go:123] Gathering logs for coredns [75ee8da793c2] ...
	I0408 10:59:52.260491    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ee8da793c2"
	I0408 10:59:52.275347    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 10:59:52.275358    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 10:59:52.292706    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 10:59:52.292721    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 10:59:52.297164    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 10:59:52.297175    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 10:59:52.309170    9084 logs.go:123] Gathering logs for Docker ...
	I0408 10:59:52.309181    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 10:59:52.333795    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 10:59:52.333813    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 10:59:52.347441    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 10:59:52.347453    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 10:59:52.359672    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 10:59:52.359683    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 10:59:52.377649    9084 logs.go:123] Gathering logs for container status ...
	I0408 10:59:52.377662    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 10:59:52.390630    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 10:59:52.390641    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 10:59:52.427078    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 10:59:52.427094    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 10:59:52.465352    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 10:59:52.465364    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 10:59:52.480972    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 10:59:52.480984    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 10:59:52.499442    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 10:59:52.499455    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 10:59:55.014506    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 11:00:00.015560    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 11:00:00.016083    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 11:00:00.052161    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 11:00:00.052255    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 11:00:00.067103    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 11:00:00.067209    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 11:00:00.080496    9084 logs.go:276] 4 containers: [dca8cb12216d 75ee8da793c2 87fb975a17ef a9842b72d70e]
	I0408 11:00:00.080561    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 11:00:00.093465    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 11:00:00.093546    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 11:00:00.105474    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 11:00:00.105543    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 11:00:00.117504    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 11:00:00.117571    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 11:00:00.127682    9084 logs.go:276] 0 containers: []
	W0408 11:00:00.127696    9084 logs.go:278] No container was found matching "kindnet"
	I0408 11:00:00.127753    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 11:00:00.143579    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 11:00:00.143596    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 11:00:00.143601    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 11:00:00.178784    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 11:00:00.178795    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 11:00:00.193493    9084 logs.go:123] Gathering logs for Docker ...
	I0408 11:00:00.193507    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 11:00:00.216893    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 11:00:00.216902    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 11:00:00.229527    9084 logs.go:123] Gathering logs for container status ...
	I0408 11:00:00.229537    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 11:00:00.240599    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 11:00:00.240611    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 11:00:00.275710    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 11:00:00.275722    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 11:00:00.291184    9084 logs.go:123] Gathering logs for coredns [dca8cb12216d] ...
	I0408 11:00:00.291196    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca8cb12216d"
	I0408 11:00:00.302840    9084 logs.go:123] Gathering logs for coredns [75ee8da793c2] ...
	I0408 11:00:00.302853    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ee8da793c2"
	I0408 11:00:00.313954    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 11:00:00.313968    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 11:00:00.327958    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 11:00:00.327969    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 11:00:00.345875    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 11:00:00.345887    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 11:00:00.356892    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 11:00:00.356905    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 11:00:00.360990    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 11:00:00.360995    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 11:00:00.372827    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 11:00:00.372838    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 11:00:02.887063    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 11:00:07.889921    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 11:00:07.890371    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 11:00:07.930507    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 11:00:07.930648    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 11:00:07.952728    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 11:00:07.952853    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 11:00:07.967813    9084 logs.go:276] 4 containers: [dca8cb12216d 75ee8da793c2 87fb975a17ef a9842b72d70e]
	I0408 11:00:07.967889    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 11:00:07.980220    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 11:00:07.980287    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 11:00:07.993782    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 11:00:07.993850    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 11:00:08.004030    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 11:00:08.004093    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 11:00:08.014632    9084 logs.go:276] 0 containers: []
	W0408 11:00:08.014643    9084 logs.go:278] No container was found matching "kindnet"
	I0408 11:00:08.014704    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 11:00:08.024983    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 11:00:08.025000    9084 logs.go:123] Gathering logs for container status ...
	I0408 11:00:08.025005    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 11:00:08.036396    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 11:00:08.036407    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 11:00:08.050844    9084 logs.go:123] Gathering logs for coredns [dca8cb12216d] ...
	I0408 11:00:08.050853    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca8cb12216d"
	I0408 11:00:08.062141    9084 logs.go:123] Gathering logs for coredns [75ee8da793c2] ...
	I0408 11:00:08.062150    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ee8da793c2"
	I0408 11:00:08.074422    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 11:00:08.074433    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 11:00:08.089972    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 11:00:08.089985    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 11:00:08.101250    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 11:00:08.101263    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 11:00:08.118870    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 11:00:08.118883    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 11:00:08.152665    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 11:00:08.152679    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 11:00:08.167071    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 11:00:08.167081    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 11:00:08.178638    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 11:00:08.178649    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 11:00:08.183239    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 11:00:08.183248    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 11:00:08.197875    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 11:00:08.197885    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 11:00:08.231850    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 11:00:08.231863    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 11:00:08.246555    9084 logs.go:123] Gathering logs for Docker ...
	I0408 11:00:08.246568    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 11:00:10.771114    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 11:00:15.773553    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 11:00:15.773923    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 11:00:15.810255    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 11:00:15.810383    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 11:00:15.832108    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 11:00:15.832209    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 11:00:15.851070    9084 logs.go:276] 4 containers: [dca8cb12216d 75ee8da793c2 87fb975a17ef a9842b72d70e]
	I0408 11:00:15.851139    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 11:00:15.868929    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 11:00:15.868999    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 11:00:15.879477    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 11:00:15.879554    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 11:00:15.889737    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 11:00:15.889806    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 11:00:15.899424    9084 logs.go:276] 0 containers: []
	W0408 11:00:15.899435    9084 logs.go:278] No container was found matching "kindnet"
	I0408 11:00:15.899484    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 11:00:15.909917    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 11:00:15.909935    9084 logs.go:123] Gathering logs for coredns [dca8cb12216d] ...
	I0408 11:00:15.909940    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca8cb12216d"
	I0408 11:00:15.921772    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 11:00:15.921786    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 11:00:15.940597    9084 logs.go:123] Gathering logs for container status ...
	I0408 11:00:15.940607    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 11:00:15.952251    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 11:00:15.952263    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 11:00:15.956943    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 11:00:15.956952    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 11:00:15.990546    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 11:00:15.990558    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 11:00:16.005272    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 11:00:16.005283    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 11:00:16.024118    9084 logs.go:123] Gathering logs for coredns [75ee8da793c2] ...
	I0408 11:00:16.024128    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ee8da793c2"
	I0408 11:00:16.035777    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 11:00:16.035790    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 11:00:16.047819    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 11:00:16.047831    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 11:00:16.081837    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 11:00:16.081844    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 11:00:16.093380    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 11:00:16.093390    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 11:00:16.108325    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 11:00:16.108335    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 11:00:16.120300    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 11:00:16.120313    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 11:00:16.131619    9084 logs.go:123] Gathering logs for Docker ...
	I0408 11:00:16.131629    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 11:00:18.658758    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 11:00:23.660962    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 11:00:23.661224    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 11:00:23.688169    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 11:00:23.688286    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 11:00:23.705909    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 11:00:23.705992    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 11:00:23.719718    9084 logs.go:276] 4 containers: [dca8cb12216d 75ee8da793c2 87fb975a17ef a9842b72d70e]
	I0408 11:00:23.719799    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 11:00:23.733229    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 11:00:23.733301    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 11:00:23.744038    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 11:00:23.744108    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 11:00:23.754924    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 11:00:23.754986    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 11:00:23.765092    9084 logs.go:276] 0 containers: []
	W0408 11:00:23.765102    9084 logs.go:278] No container was found matching "kindnet"
	I0408 11:00:23.765158    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 11:00:23.775897    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 11:00:23.775917    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 11:00:23.775922    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 11:00:23.790469    9084 logs.go:123] Gathering logs for coredns [75ee8da793c2] ...
	I0408 11:00:23.790483    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ee8da793c2"
	I0408 11:00:23.802155    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 11:00:23.802169    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 11:00:23.816498    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 11:00:23.816511    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 11:00:23.835299    9084 logs.go:123] Gathering logs for container status ...
	I0408 11:00:23.835310    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 11:00:23.847882    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 11:00:23.847895    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 11:00:23.859513    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 11:00:23.859527    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 11:00:23.871500    9084 logs.go:123] Gathering logs for coredns [dca8cb12216d] ...
	I0408 11:00:23.871512    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca8cb12216d"
	I0408 11:00:23.883051    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 11:00:23.883064    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 11:00:23.894403    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 11:00:23.894417    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 11:00:23.928403    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 11:00:23.928411    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 11:00:23.932848    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 11:00:23.932855    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 11:00:23.968788    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 11:00:23.968797    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 11:00:23.986957    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 11:00:23.986968    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 11:00:24.008041    9084 logs.go:123] Gathering logs for Docker ...
	I0408 11:00:24.008053    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 11:00:26.533770    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 11:00:31.536059    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 11:00:31.536147    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 11:00:31.548591    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 11:00:31.548666    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 11:00:31.560822    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 11:00:31.560898    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 11:00:31.573631    9084 logs.go:276] 4 containers: [dca8cb12216d 75ee8da793c2 87fb975a17ef a9842b72d70e]
	I0408 11:00:31.573709    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 11:00:31.585739    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 11:00:31.585808    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 11:00:31.597620    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 11:00:31.597690    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 11:00:31.610163    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 11:00:31.610237    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 11:00:31.622007    9084 logs.go:276] 0 containers: []
	W0408 11:00:31.622021    9084 logs.go:278] No container was found matching "kindnet"
	I0408 11:00:31.622081    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 11:00:31.634417    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 11:00:31.634437    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 11:00:31.634442    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 11:00:31.650246    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 11:00:31.650265    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 11:00:31.654959    9084 logs.go:123] Gathering logs for coredns [75ee8da793c2] ...
	I0408 11:00:31.654968    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ee8da793c2"
	I0408 11:00:31.666956    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 11:00:31.666966    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 11:00:31.680550    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 11:00:31.680561    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 11:00:31.714582    9084 logs.go:123] Gathering logs for coredns [dca8cb12216d] ...
	I0408 11:00:31.714596    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca8cb12216d"
	I0408 11:00:31.731263    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 11:00:31.731273    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 11:00:31.746931    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 11:00:31.746941    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 11:00:31.762500    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 11:00:31.762510    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 11:00:31.782273    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 11:00:31.782283    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 11:00:31.822421    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 11:00:31.822433    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 11:00:31.856608    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 11:00:31.856617    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 11:00:31.870980    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 11:00:31.870989    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 11:00:31.882466    9084 logs.go:123] Gathering logs for Docker ...
	I0408 11:00:31.882476    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 11:00:31.906419    9084 logs.go:123] Gathering logs for container status ...
	I0408 11:00:31.906429    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 11:00:34.420355    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 11:00:39.423146    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 11:00:39.423503    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 11:00:39.453947    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 11:00:39.454064    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 11:00:39.473329    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 11:00:39.473442    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 11:00:39.487309    9084 logs.go:276] 4 containers: [dca8cb12216d 75ee8da793c2 87fb975a17ef a9842b72d70e]
	I0408 11:00:39.487388    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 11:00:39.499513    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 11:00:39.499583    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 11:00:39.509908    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 11:00:39.509979    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 11:00:39.520504    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 11:00:39.520563    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 11:00:39.530881    9084 logs.go:276] 0 containers: []
	W0408 11:00:39.530893    9084 logs.go:278] No container was found matching "kindnet"
	I0408 11:00:39.530954    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 11:00:39.541357    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 11:00:39.541375    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 11:00:39.541381    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 11:00:39.575098    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 11:00:39.575109    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 11:00:39.597657    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 11:00:39.597667    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 11:00:39.615284    9084 logs.go:123] Gathering logs for coredns [dca8cb12216d] ...
	I0408 11:00:39.615293    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca8cb12216d"
	I0408 11:00:39.627119    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 11:00:39.627131    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 11:00:39.641693    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 11:00:39.641704    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 11:00:39.654211    9084 logs.go:123] Gathering logs for container status ...
	I0408 11:00:39.654223    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 11:00:39.665974    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 11:00:39.665984    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 11:00:39.699360    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 11:00:39.699369    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 11:00:39.703256    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 11:00:39.703263    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 11:00:39.724169    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 11:00:39.724180    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 11:00:39.738398    9084 logs.go:123] Gathering logs for coredns [75ee8da793c2] ...
	I0408 11:00:39.738407    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ee8da793c2"
	I0408 11:00:39.749906    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 11:00:39.749919    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 11:00:39.766994    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 11:00:39.767005    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 11:00:39.778671    9084 logs.go:123] Gathering logs for Docker ...
	I0408 11:00:39.778684    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 11:00:42.303498    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 11:00:47.305701    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 11:00:47.305949    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 11:00:47.330467    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 11:00:47.330593    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 11:00:47.347981    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 11:00:47.348064    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 11:00:47.361793    9084 logs.go:276] 4 containers: [dca8cb12216d 75ee8da793c2 87fb975a17ef a9842b72d70e]
	I0408 11:00:47.361862    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 11:00:47.373033    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 11:00:47.373095    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 11:00:47.390875    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 11:00:47.390946    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 11:00:47.401374    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 11:00:47.401436    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 11:00:47.431800    9084 logs.go:276] 0 containers: []
	W0408 11:00:47.431814    9084 logs.go:278] No container was found matching "kindnet"
	I0408 11:00:47.431865    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 11:00:47.442279    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 11:00:47.442298    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 11:00:47.442303    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 11:00:47.453425    9084 logs.go:123] Gathering logs for container status ...
	I0408 11:00:47.453436    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 11:00:47.465317    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 11:00:47.465329    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 11:00:47.469580    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 11:00:47.469587    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 11:00:47.483104    9084 logs.go:123] Gathering logs for coredns [75ee8da793c2] ...
	I0408 11:00:47.483117    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ee8da793c2"
	I0408 11:00:47.494223    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 11:00:47.494233    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 11:00:47.505737    9084 logs.go:123] Gathering logs for Docker ...
	I0408 11:00:47.505745    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 11:00:47.528880    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 11:00:47.528888    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 11:00:47.562509    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 11:00:47.562520    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 11:00:47.580754    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 11:00:47.580764    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 11:00:47.592482    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 11:00:47.596095    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 11:00:47.611497    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 11:00:47.611505    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 11:00:47.623937    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 11:00:47.623948    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 11:00:47.641124    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 11:00:47.641134    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 11:00:47.674579    9084 logs.go:123] Gathering logs for coredns [dca8cb12216d] ...
	I0408 11:00:47.674587    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca8cb12216d"
	I0408 11:00:50.188201    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 11:00:55.189812    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 11:00:55.189902    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 11:00:55.202602    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 11:00:55.202668    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 11:00:55.214738    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 11:00:55.214789    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 11:00:55.227075    9084 logs.go:276] 4 containers: [dca8cb12216d 75ee8da793c2 87fb975a17ef a9842b72d70e]
	I0408 11:00:55.227139    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 11:00:55.238299    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 11:00:55.238363    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 11:00:55.249702    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 11:00:55.249789    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 11:00:55.261071    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 11:00:55.261119    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 11:00:55.271484    9084 logs.go:276] 0 containers: []
	W0408 11:00:55.271495    9084 logs.go:278] No container was found matching "kindnet"
	I0408 11:00:55.271550    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 11:00:55.283098    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 11:00:55.283114    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 11:00:55.283119    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 11:00:55.300755    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 11:00:55.300766    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 11:00:55.319904    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 11:00:55.319916    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 11:00:55.332875    9084 logs.go:123] Gathering logs for Docker ...
	I0408 11:00:55.332885    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 11:00:55.357623    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 11:00:55.357643    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 11:00:55.394207    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 11:00:55.394218    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 11:00:55.429840    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 11:00:55.429852    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 11:00:55.445356    9084 logs.go:123] Gathering logs for coredns [dca8cb12216d] ...
	I0408 11:00:55.445367    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca8cb12216d"
	I0408 11:00:55.458754    9084 logs.go:123] Gathering logs for container status ...
	I0408 11:00:55.458766    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 11:00:55.471775    9084 logs.go:123] Gathering logs for coredns [75ee8da793c2] ...
	I0408 11:00:55.471784    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ee8da793c2"
	I0408 11:00:55.483414    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 11:00:55.483424    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 11:00:55.495113    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 11:00:55.495121    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 11:00:55.511559    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 11:00:55.511568    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 11:00:55.516125    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 11:00:55.516137    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 11:00:55.530930    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 11:00:55.530941    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 11:00:58.044964    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 11:01:03.047520    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 11:01:03.047644    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 11:01:03.060506    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 11:01:03.060583    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 11:01:03.078956    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 11:01:03.079005    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 11:01:03.089806    9084 logs.go:276] 4 containers: [dca8cb12216d 75ee8da793c2 87fb975a17ef a9842b72d70e]
	I0408 11:01:03.089877    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 11:01:03.100660    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 11:01:03.100725    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 11:01:03.111474    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 11:01:03.111540    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 11:01:03.122208    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 11:01:03.122277    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 11:01:03.132716    9084 logs.go:276] 0 containers: []
	W0408 11:01:03.132727    9084 logs.go:278] No container was found matching "kindnet"
	I0408 11:01:03.132787    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 11:01:03.143772    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 11:01:03.143787    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 11:01:03.143792    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 11:01:03.148216    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 11:01:03.148223    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 11:01:03.185297    9084 logs.go:123] Gathering logs for coredns [75ee8da793c2] ...
	I0408 11:01:03.185307    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ee8da793c2"
	I0408 11:01:03.198787    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 11:01:03.198798    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 11:01:03.210314    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 11:01:03.210325    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 11:01:03.224291    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 11:01:03.224302    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 11:01:03.257957    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 11:01:03.257965    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 11:01:03.272523    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 11:01:03.272533    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 11:01:03.286782    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 11:01:03.286792    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 11:01:03.298628    9084 logs.go:123] Gathering logs for Docker ...
	I0408 11:01:03.298638    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 11:01:03.321636    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 11:01:03.321645    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 11:01:03.339643    9084 logs.go:123] Gathering logs for container status ...
	I0408 11:01:03.339654    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 11:01:03.351954    9084 logs.go:123] Gathering logs for coredns [dca8cb12216d] ...
	I0408 11:01:03.351964    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca8cb12216d"
	I0408 11:01:03.363251    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 11:01:03.363262    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 11:01:03.374543    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 11:01:03.374553    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 11:01:05.889431    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 11:01:10.889816    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 11:01:10.890288    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 11:01:10.930409    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 11:01:10.930540    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 11:01:10.951166    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 11:01:10.951277    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 11:01:10.971090    9084 logs.go:276] 4 containers: [dca8cb12216d 75ee8da793c2 87fb975a17ef a9842b72d70e]
	I0408 11:01:10.971169    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 11:01:10.982963    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 11:01:10.983033    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 11:01:10.993163    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 11:01:10.993219    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 11:01:11.003708    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 11:01:11.003781    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 11:01:11.014363    9084 logs.go:276] 0 containers: []
	W0408 11:01:11.014374    9084 logs.go:278] No container was found matching "kindnet"
	I0408 11:01:11.014429    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 11:01:11.024586    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 11:01:11.024605    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 11:01:11.024610    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 11:01:11.029281    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 11:01:11.029288    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 11:01:11.040599    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 11:01:11.040609    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 11:01:11.056959    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 11:01:11.056968    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 11:01:11.074236    9084 logs.go:123] Gathering logs for container status ...
	I0408 11:01:11.074246    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 11:01:11.085690    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 11:01:11.085720    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 11:01:11.120409    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 11:01:11.120421    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 11:01:11.134812    9084 logs.go:123] Gathering logs for coredns [dca8cb12216d] ...
	I0408 11:01:11.134820    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca8cb12216d"
	I0408 11:01:11.146458    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 11:01:11.146471    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 11:01:11.161249    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 11:01:11.161262    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 11:01:11.172606    9084 logs.go:123] Gathering logs for Docker ...
	I0408 11:01:11.172619    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 11:01:11.197320    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 11:01:11.197330    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 11:01:11.231719    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 11:01:11.231729    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 11:01:11.245963    9084 logs.go:123] Gathering logs for coredns [75ee8da793c2] ...
	I0408 11:01:11.245975    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ee8da793c2"
	I0408 11:01:11.257285    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 11:01:11.257297    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 11:01:13.771245    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 11:01:18.773540    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 11:01:18.773635    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 11:01:18.786037    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 11:01:18.786096    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 11:01:18.807790    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 11:01:18.807861    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 11:01:18.820441    9084 logs.go:276] 4 containers: [dca8cb12216d 75ee8da793c2 87fb975a17ef a9842b72d70e]
	I0408 11:01:18.820500    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 11:01:18.831751    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 11:01:18.831801    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 11:01:18.842857    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 11:01:18.842911    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 11:01:18.854959    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 11:01:18.855024    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 11:01:18.868152    9084 logs.go:276] 0 containers: []
	W0408 11:01:18.868163    9084 logs.go:278] No container was found matching "kindnet"
	I0408 11:01:18.868206    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 11:01:18.878848    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 11:01:18.878863    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 11:01:18.878868    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 11:01:18.893440    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 11:01:18.893451    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 11:01:18.911732    9084 logs.go:123] Gathering logs for coredns [dca8cb12216d] ...
	I0408 11:01:18.911748    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca8cb12216d"
	I0408 11:01:18.924220    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 11:01:18.924229    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 11:01:18.936171    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 11:01:18.936183    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 11:01:18.952305    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 11:01:18.952316    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 11:01:18.989722    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 11:01:18.989735    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 11:01:19.003185    9084 logs.go:123] Gathering logs for Docker ...
	I0408 11:01:19.003201    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 11:01:19.029431    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 11:01:19.029446    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 11:01:19.033978    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 11:01:19.033988    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 11:01:19.070332    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 11:01:19.070343    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 11:01:19.084933    9084 logs.go:123] Gathering logs for container status ...
	I0408 11:01:19.084945    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 11:01:19.098051    9084 logs.go:123] Gathering logs for coredns [75ee8da793c2] ...
	I0408 11:01:19.098065    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ee8da793c2"
	I0408 11:01:19.110330    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 11:01:19.110340    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 11:01:19.123325    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 11:01:19.123338    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 11:01:21.644549    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 11:01:26.647041    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 11:01:26.647484    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0408 11:01:26.684507    9084 logs.go:276] 1 containers: [7bdd7e89bb5e]
	I0408 11:01:26.684646    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0408 11:01:26.706425    9084 logs.go:276] 1 containers: [2d7aa09ccb37]
	I0408 11:01:26.706540    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0408 11:01:26.721596    9084 logs.go:276] 4 containers: [dca8cb12216d 75ee8da793c2 87fb975a17ef a9842b72d70e]
	I0408 11:01:26.721687    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0408 11:01:26.734390    9084 logs.go:276] 1 containers: [e6a5019e4ae9]
	I0408 11:01:26.734459    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0408 11:01:26.745137    9084 logs.go:276] 1 containers: [a77492b7bcd1]
	I0408 11:01:26.745205    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0408 11:01:26.755821    9084 logs.go:276] 1 containers: [61ce507096f5]
	I0408 11:01:26.755890    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0408 11:01:26.765608    9084 logs.go:276] 0 containers: []
	W0408 11:01:26.765624    9084 logs.go:278] No container was found matching "kindnet"
	I0408 11:01:26.765688    9084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0408 11:01:26.776144    9084 logs.go:276] 1 containers: [681c9a8ad25a]
	I0408 11:01:26.776163    9084 logs.go:123] Gathering logs for etcd [2d7aa09ccb37] ...
	I0408 11:01:26.776169    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7aa09ccb37"
	I0408 11:01:26.795101    9084 logs.go:123] Gathering logs for storage-provisioner [681c9a8ad25a] ...
	I0408 11:01:26.795114    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 681c9a8ad25a"
	I0408 11:01:26.806801    9084 logs.go:123] Gathering logs for container status ...
	I0408 11:01:26.806811    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 11:01:26.818517    9084 logs.go:123] Gathering logs for kube-apiserver [7bdd7e89bb5e] ...
	I0408 11:01:26.818530    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bdd7e89bb5e"
	I0408 11:01:26.837310    9084 logs.go:123] Gathering logs for coredns [87fb975a17ef] ...
	I0408 11:01:26.837321    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87fb975a17ef"
	I0408 11:01:26.849660    9084 logs.go:123] Gathering logs for coredns [a9842b72d70e] ...
	I0408 11:01:26.849672    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9842b72d70e"
	I0408 11:01:26.862063    9084 logs.go:123] Gathering logs for Docker ...
	I0408 11:01:26.862074    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0408 11:01:26.884766    9084 logs.go:123] Gathering logs for dmesg ...
	I0408 11:01:26.884776    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 11:01:26.889286    9084 logs.go:123] Gathering logs for kube-proxy [a77492b7bcd1] ...
	I0408 11:01:26.889292    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a77492b7bcd1"
	I0408 11:01:26.907867    9084 logs.go:123] Gathering logs for kube-scheduler [e6a5019e4ae9] ...
	I0408 11:01:26.907880    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5019e4ae9"
	I0408 11:01:26.922216    9084 logs.go:123] Gathering logs for kube-controller-manager [61ce507096f5] ...
	I0408 11:01:26.922226    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ce507096f5"
	I0408 11:01:26.938932    9084 logs.go:123] Gathering logs for kubelet ...
	I0408 11:01:26.938941    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 11:01:26.971909    9084 logs.go:123] Gathering logs for describe nodes ...
	I0408 11:01:26.971918    9084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 11:01:27.006626    9084 logs.go:123] Gathering logs for coredns [dca8cb12216d] ...
	I0408 11:01:27.006636    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dca8cb12216d"
	I0408 11:01:27.018243    9084 logs.go:123] Gathering logs for coredns [75ee8da793c2] ...
	I0408 11:01:27.018256    9084 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ee8da793c2"
	I0408 11:01:29.532354    9084 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0408 11:01:34.535043    9084 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0408 11:01:34.541187    9084 out.go:177] 
	W0408 11:01:34.545108    9084 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0408 11:01:34.545137    9084 out.go:239] * 
	* 
	W0408 11:01:34.547472    9084 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:01:34.564098    9084 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-476000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.41s)
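
The Upgrade failure above differs from the provisioning failures that follow: the VM boots and the control-plane containers come up, but the apiserver never reports healthy at https://10.0.2.15:8443/healthz inside the 6m0s node-start window. A minimal sketch for probing the same endpoint by hand, assuming the stopped-upgrade-476000 profile is still up and reachable from the host the way minikube reaches it (IP, port, binary path, and kubeconfig path are all taken from the log above):

    # Probe the endpoint minikube polls; the apiserver cert is self-signed, hence -k
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz; echo

    # Or ask from inside the guest, mirroring the "describe nodes" gathering step
    out/minikube-darwin-arm64 ssh -p stopped-upgrade-476000 -- \
      sudo /var/lib/minikube/binaries/v1.24.1/kubectl get --raw /healthz \
        --kubeconfig=/var/lib/minikube/kubeconfig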

TestPause/serial/Start (9.88s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-688000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-688000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.842926167s)

-- stdout --
	* [pause-688000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-688000" primary control-plane node in "pause-688000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-688000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-688000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-688000 -n pause-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-688000 -n pause-688000: exit status 7 (32.985666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.88s)
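
This failure mode repeats through the rest of the report: minikube launches qemu through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, which means the socket_vmnet daemon is not serving that socket on the host. A short sketch for checking the daemon, assuming the /opt/socket_vmnet layout shown in the log (the daemon invocation follows the socket_vmnet README; the gateway address is only an example value):

    # Is the socket present, and does a trivial client handshake succeed?
    ls -l /var/run/socket_vmnet
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

    # If the handshake is also refused, (re)start the daemon (vmnet needs root), e.g.:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet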

TestNoKubernetes/serial/StartWithK8s (9.89s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-535000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-535000 --driver=qemu2 : exit status 80 (9.836592s)

-- stdout --
	* [NoKubernetes-535000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-535000" primary control-plane node in "NoKubernetes-535000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-535000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-535000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-535000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-535000 -n NoKubernetes-535000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-535000 -n NoKubernetes-535000: exit status 7 (57.253917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-535000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.89s)

TestNoKubernetes/serial/StartWithStopK8s (5.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-535000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-535000 --no-kubernetes --driver=qemu2 : exit status 80 (5.241531375s)

-- stdout --
	* [NoKubernetes-535000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-535000
	* Restarting existing qemu2 VM for "NoKubernetes-535000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-535000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-535000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-535000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-535000 -n NoKubernetes-535000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-535000 -n NoKubernetes-535000: exit status 7 (67.998333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-535000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-535000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-535000 --no-kubernetes --driver=qemu2 : exit status 80 (5.256939208s)

-- stdout --
	* [NoKubernetes-535000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-535000
	* Restarting existing qemu2 VM for "NoKubernetes-535000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-535000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-535000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-535000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-535000 -n NoKubernetes-535000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-535000 -n NoKubernetes-535000: exit status 7 (62.327833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-535000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-535000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-535000 --driver=qemu2 : exit status 80 (5.259270541s)

-- stdout --
	* [NoKubernetes-535000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-535000
	* Restarting existing qemu2 VM for "NoKubernetes-535000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-535000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-535000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-535000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-535000 -n NoKubernetes-535000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-535000 -n NoKubernetes-535000: exit status 7 (66.466875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-535000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.33s)
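
Note the wording shift across the four NoKubernetes failures above: the first run fails at "creating host", while the later runs fail at "driver start" during "Restarting existing qemu2 VM", because they reuse the half-created profile left behind by the first attempt. The error text itself names the cleanup; a sketch using the profile name and binary from the log:

    # Drop the stale profile so the next start recreates the VM from scratch
    out/minikube-darwin-arm64 delete -p NoKubernetes-535000

    # Retry once /var/run/socket_vmnet is being served again (see the check above)
    out/minikube-darwin-arm64 start -p NoKubernetes-535000 --no-kubernetes --driver=qemu2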

TestNetworkPlugins/group/auto/Start (9.76s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.757248167s)

-- stdout --
	* [auto-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-363000" primary control-plane node in "auto-363000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-363000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 10:59:42.463519    9319 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:59:42.463645    9319 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:59:42.463649    9319 out.go:304] Setting ErrFile to fd 2...
	I0408 10:59:42.463651    9319 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:59:42.463785    9319 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:59:42.464873    9319 out.go:298] Setting JSON to false
	I0408 10:59:42.481810    9319 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7152,"bootTime":1712592030,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:59:42.481884    9319 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:59:42.487778    9319 out.go:177] * [auto-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:59:42.495701    9319 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:59:42.500778    9319 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:59:42.495775    9319 notify.go:220] Checking for updates...
	I0408 10:59:42.506681    9319 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:59:42.509727    9319 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:59:42.512759    9319 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:59:42.515697    9319 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:59:42.519062    9319 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:59:42.519123    9319 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 10:59:42.519174    9319 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:59:42.522636    9319 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 10:59:42.529748    9319 start.go:297] selected driver: qemu2
	I0408 10:59:42.529755    9319 start.go:901] validating driver "qemu2" against <nil>
	I0408 10:59:42.529760    9319 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:59:42.531958    9319 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 10:59:42.536736    9319 out.go:177] * Automatically selected the socket_vmnet network
	I0408 10:59:42.539848    9319 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 10:59:42.539896    9319 cni.go:84] Creating CNI manager for ""
	I0408 10:59:42.539905    9319 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:59:42.539912    9319 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 10:59:42.539941    9319 start.go:340] cluster config:
	{Name:auto-363000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:59:42.544283    9319 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:59:42.552741    9319 out.go:177] * Starting "auto-363000" primary control-plane node in "auto-363000" cluster
	I0408 10:59:42.555576    9319 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:59:42.555589    9319 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:59:42.555595    9319 cache.go:56] Caching tarball of preloaded images
	I0408 10:59:42.555642    9319 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:59:42.555648    9319 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:59:42.555703    9319 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/auto-363000/config.json ...
	I0408 10:59:42.555715    9319 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/auto-363000/config.json: {Name:mk20a06eac6951529cb9cb31f2890ce1be696d10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:59:42.555919    9319 start.go:360] acquireMachinesLock for auto-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:59:42.555951    9319 start.go:364] duration metric: took 26.167µs to acquireMachinesLock for "auto-363000"
	I0408 10:59:42.555961    9319 start.go:93] Provisioning new machine with config: &{Name:auto-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:59:42.555994    9319 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:59:42.563687    9319 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 10:59:42.578054    9319 start.go:159] libmachine.API.Create for "auto-363000" (driver="qemu2")
	I0408 10:59:42.578082    9319 client.go:168] LocalClient.Create starting
	I0408 10:59:42.578139    9319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:59:42.578168    9319 main.go:141] libmachine: Decoding PEM data...
	I0408 10:59:42.578177    9319 main.go:141] libmachine: Parsing certificate...
	I0408 10:59:42.578213    9319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:59:42.578233    9319 main.go:141] libmachine: Decoding PEM data...
	I0408 10:59:42.578239    9319 main.go:141] libmachine: Parsing certificate...
	I0408 10:59:42.578614    9319 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:59:42.735756    9319 main.go:141] libmachine: Creating SSH key...
	I0408 10:59:42.789744    9319 main.go:141] libmachine: Creating Disk image...
	I0408 10:59:42.789749    9319 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:59:42.789998    9319 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/disk.qcow2
	I0408 10:59:42.802251    9319 main.go:141] libmachine: STDOUT: 
	I0408 10:59:42.802268    9319 main.go:141] libmachine: STDERR: 
	I0408 10:59:42.802313    9319 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/disk.qcow2 +20000M
	I0408 10:59:42.813400    9319 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:59:42.813417    9319 main.go:141] libmachine: STDERR: 
	I0408 10:59:42.813436    9319 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/disk.qcow2
	I0408 10:59:42.813452    9319 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:59:42.813482    9319 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:a4:10:b1:fb:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/disk.qcow2
	I0408 10:59:42.815219    9319 main.go:141] libmachine: STDOUT: 
	I0408 10:59:42.815234    9319 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:59:42.815254    9319 client.go:171] duration metric: took 237.1645ms to LocalClient.Create
	I0408 10:59:44.817358    9319 start.go:128] duration metric: took 2.26133975s to createHost
	I0408 10:59:44.817391    9319 start.go:83] releasing machines lock for "auto-363000", held for 2.26142075s
	W0408 10:59:44.817417    9319 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:59:44.826982    9319 out.go:177] * Deleting "auto-363000" in qemu2 ...
	W0408 10:59:44.849955    9319 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:59:44.849970    9319 start.go:728] Will try again in 5 seconds ...
	I0408 10:59:49.852244    9319 start.go:360] acquireMachinesLock for auto-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:59:49.852911    9319 start.go:364] duration metric: took 499.084µs to acquireMachinesLock for "auto-363000"
	I0408 10:59:49.853090    9319 start.go:93] Provisioning new machine with config: &{Name:auto-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:59:49.853415    9319 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:59:49.862774    9319 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 10:59:49.911138    9319 start.go:159] libmachine.API.Create for "auto-363000" (driver="qemu2")
	I0408 10:59:49.911191    9319 client.go:168] LocalClient.Create starting
	I0408 10:59:49.911293    9319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:59:49.911358    9319 main.go:141] libmachine: Decoding PEM data...
	I0408 10:59:49.911375    9319 main.go:141] libmachine: Parsing certificate...
	I0408 10:59:49.911437    9319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:59:49.911479    9319 main.go:141] libmachine: Decoding PEM data...
	I0408 10:59:49.911494    9319 main.go:141] libmachine: Parsing certificate...
	I0408 10:59:49.912153    9319 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:59:50.072284    9319 main.go:141] libmachine: Creating SSH key...
	I0408 10:59:50.121676    9319 main.go:141] libmachine: Creating Disk image...
	I0408 10:59:50.121681    9319 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:59:50.121914    9319 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/disk.qcow2
	I0408 10:59:50.134356    9319 main.go:141] libmachine: STDOUT: 
	I0408 10:59:50.134379    9319 main.go:141] libmachine: STDERR: 
	I0408 10:59:50.134433    9319 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/disk.qcow2 +20000M
	I0408 10:59:50.145387    9319 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:59:50.145401    9319 main.go:141] libmachine: STDERR: 
	I0408 10:59:50.145419    9319 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/disk.qcow2
	I0408 10:59:50.145423    9319 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:59:50.145450    9319 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:11:e8:9f:b0:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/auto-363000/disk.qcow2
	I0408 10:59:50.147247    9319 main.go:141] libmachine: STDOUT: 
	I0408 10:59:50.147261    9319 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:59:50.147272    9319 client.go:171] duration metric: took 236.073333ms to LocalClient.Create
	I0408 10:59:52.149383    9319 start.go:128] duration metric: took 2.295936375s to createHost
	I0408 10:59:52.149433    9319 start.go:83] releasing machines lock for "auto-363000", held for 2.296461s
	W0408 10:59:52.149562    9319 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:59:52.165960    9319 out.go:177] 
	W0408 10:59:52.168986    9319 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 10:59:52.168993    9319 out.go:239] * 
	* 
	W0408 10:59:52.169597    9319 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:59:52.181898    9319 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.76s)
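
The verbose trace above also shows how the qemu2 driver wires guest networking: qemu is not pointed at the socket path itself; it runs as a child of socket_vmnet_client, which connects to /var/run/socket_vmnet and passes the connection to the child as file descriptor 3, which is what the -netdev socket,id=net0,fd=3 argument consumes. A stripped-down sketch of that handoff (the ISO path is a placeholder; the remaining flags are the ones visible in the log):

    # socket_vmnet_client connects to the socket, then execs qemu with the
    # connection inherited as fd 3; qemu uses that fd as its network backend.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
        -display none -m 3072 -smp 2 \
        -device virtio-net-pci,netdev=net0 \
        -netdev socket,id=net0,fd=3 \
        -cdrom /path/to/boot2docker.iso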

TestNetworkPlugins/group/kindnet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.848274083s)

-- stdout --
	* [kindnet-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-363000" primary control-plane node in "kindnet-363000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-363000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 10:59:54.512042    9429 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:59:54.512180    9429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:59:54.512183    9429 out.go:304] Setting ErrFile to fd 2...
	I0408 10:59:54.512186    9429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:59:54.512343    9429 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:59:54.513408    9429 out.go:298] Setting JSON to false
	I0408 10:59:54.529799    9429 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7164,"bootTime":1712592030,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:59:54.529858    9429 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:59:54.537156    9429 out.go:177] * [kindnet-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:59:54.544091    9429 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:59:54.548004    9429 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:59:54.544130    9429 notify.go:220] Checking for updates...
	I0408 10:59:54.554132    9429 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:59:54.557075    9429 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:59:54.560106    9429 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:59:54.563148    9429 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:59:54.566500    9429 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:59:54.566565    9429 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 10:59:54.566612    9429 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:59:54.571235    9429 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 10:59:54.577000    9429 start.go:297] selected driver: qemu2
	I0408 10:59:54.577007    9429 start.go:901] validating driver "qemu2" against <nil>
	I0408 10:59:54.577013    9429 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:59:54.579216    9429 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 10:59:54.582114    9429 out.go:177] * Automatically selected the socket_vmnet network
	I0408 10:59:54.585159    9429 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 10:59:54.585190    9429 cni.go:84] Creating CNI manager for "kindnet"
	I0408 10:59:54.585196    9429 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 10:59:54.585222    9429 start.go:340] cluster config:
	{Name:kindnet-363000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:59:54.589398    9429 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:59:54.596138    9429 out.go:177] * Starting "kindnet-363000" primary control-plane node in "kindnet-363000" cluster
	I0408 10:59:54.600063    9429 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:59:54.600079    9429 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:59:54.600087    9429 cache.go:56] Caching tarball of preloaded images
	I0408 10:59:54.600147    9429 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 10:59:54.600152    9429 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:59:54.600231    9429 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/kindnet-363000/config.json ...
	I0408 10:59:54.600243    9429 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/kindnet-363000/config.json: {Name:mkbd2735dbf29a2788ae73bf8ff1ef205292816f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:59:54.600439    9429 start.go:360] acquireMachinesLock for kindnet-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 10:59:54.600465    9429 start.go:364] duration metric: took 21.75µs to acquireMachinesLock for "kindnet-363000"
	I0408 10:59:54.600475    9429 start.go:93] Provisioning new machine with config: &{Name:kindnet-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 10:59:54.600499    9429 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 10:59:54.609135    9429 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 10:59:54.623385    9429 start.go:159] libmachine.API.Create for "kindnet-363000" (driver="qemu2")
	I0408 10:59:54.623408    9429 client.go:168] LocalClient.Create starting
	I0408 10:59:54.623487    9429 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 10:59:54.623518    9429 main.go:141] libmachine: Decoding PEM data...
	I0408 10:59:54.623529    9429 main.go:141] libmachine: Parsing certificate...
	I0408 10:59:54.623562    9429 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 10:59:54.623585    9429 main.go:141] libmachine: Decoding PEM data...
	I0408 10:59:54.623594    9429 main.go:141] libmachine: Parsing certificate...
	I0408 10:59:54.623963    9429 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 10:59:54.789103    9429 main.go:141] libmachine: Creating SSH key...
	I0408 10:59:54.862747    9429 main.go:141] libmachine: Creating Disk image...
	I0408 10:59:54.862752    9429 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 10:59:54.862979    9429 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/disk.qcow2
	I0408 10:59:54.875379    9429 main.go:141] libmachine: STDOUT: 
	I0408 10:59:54.875402    9429 main.go:141] libmachine: STDERR: 
	I0408 10:59:54.875460    9429 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/disk.qcow2 +20000M
	I0408 10:59:54.886315    9429 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 10:59:54.886335    9429 main.go:141] libmachine: STDERR: 
	I0408 10:59:54.886350    9429 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/disk.qcow2
	I0408 10:59:54.886362    9429 main.go:141] libmachine: Starting QEMU VM...
	I0408 10:59:54.886397    9429 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:1c:3e:89:ec:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/disk.qcow2
	I0408 10:59:54.888152    9429 main.go:141] libmachine: STDOUT: 
	I0408 10:59:54.888170    9429 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 10:59:54.888191    9429 client.go:171] duration metric: took 264.775834ms to LocalClient.Create
	I0408 10:59:56.890425    9429 start.go:128] duration metric: took 2.289881709s to createHost
	I0408 10:59:56.890536    9429 start.go:83] releasing machines lock for "kindnet-363000", held for 2.290046708s
	W0408 10:59:56.890710    9429 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:59:56.905934    9429 out.go:177] * Deleting "kindnet-363000" in qemu2 ...
	W0408 10:59:56.936966    9429 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 10:59:56.936996    9429 start.go:728] Will try again in 5 seconds ...
	I0408 11:00:01.939250    9429 start.go:360] acquireMachinesLock for kindnet-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:00:01.939700    9429 start.go:364] duration metric: took 352.542µs to acquireMachinesLock for "kindnet-363000"
	I0408 11:00:01.939824    9429 start.go:93] Provisioning new machine with config: &{Name:kindnet-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:00:01.940090    9429 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:00:01.948478    9429 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 11:00:01.988662    9429 start.go:159] libmachine.API.Create for "kindnet-363000" (driver="qemu2")
	I0408 11:00:01.988715    9429 client.go:168] LocalClient.Create starting
	I0408 11:00:01.988818    9429 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:00:01.988879    9429 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:01.988898    9429 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:01.988975    9429 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:00:01.989023    9429 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:01.989038    9429 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:01.989544    9429 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:00:02.146790    9429 main.go:141] libmachine: Creating SSH key...
	I0408 11:00:02.268998    9429 main.go:141] libmachine: Creating Disk image...
	I0408 11:00:02.269005    9429 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:00:02.269258    9429 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/disk.qcow2
	I0408 11:00:02.281810    9429 main.go:141] libmachine: STDOUT: 
	I0408 11:00:02.281837    9429 main.go:141] libmachine: STDERR: 
	I0408 11:00:02.281889    9429 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/disk.qcow2 +20000M
	I0408 11:00:02.292624    9429 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:00:02.292643    9429 main.go:141] libmachine: STDERR: 
	I0408 11:00:02.292655    9429 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/disk.qcow2
	I0408 11:00:02.292660    9429 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:00:02.292690    9429 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:20:b9:49:93:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kindnet-363000/disk.qcow2
	I0408 11:00:02.294450    9429 main.go:141] libmachine: STDOUT: 
	I0408 11:00:02.294468    9429 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:00:02.294480    9429 client.go:171] duration metric: took 305.758459ms to LocalClient.Create
	I0408 11:00:04.296701    9429 start.go:128] duration metric: took 2.356559584s to createHost
	I0408 11:00:04.296798    9429 start.go:83] releasing machines lock for "kindnet-363000", held for 2.357062958s
	W0408 11:00:04.297191    9429 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:00:04.305768    9429 out.go:177] 
	W0408 11:00:04.309768    9429 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:00:04.309784    9429 out.go:239] * 
	* 
	W0408 11:00:04.311347    9429 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:00:04.320489    9429 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)
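
Every start failure in this group reduces to the same host-side problem: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client is refused before QEMU is ever launched. A minimal probe of that socket, written as a standalone sketch (the path is taken from the logs above; the file name and helper are hypothetical, not part of the minikube codebase), reproduces the error without running any test:

    // socketprobe.go - dial the socket_vmnet control socket the same way
    // socket_vmnet_client does. "connection refused" here means the
    // socket_vmnet daemon is down on the build agent.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path reported in the failures above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections at", sock)
    }

If this probe fails with "connection refused", restarting the socket_vmnet daemon on the agent is the fix; re-running the tests without it will keep producing exit status 80.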

TestNetworkPlugins/group/calico/Start (9.89s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.885460333s)

-- stdout --
	* [calico-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-363000" primary control-plane node in "calico-363000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-363000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 11:00:06.724191    9560 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:00:06.724335    9560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:00:06.724339    9560 out.go:304] Setting ErrFile to fd 2...
	I0408 11:00:06.724341    9560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:00:06.724489    9560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:00:06.725536    9560 out.go:298] Setting JSON to false
	I0408 11:00:06.741797    9560 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7176,"bootTime":1712592030,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:00:06.741855    9560 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:00:06.749046    9560 out.go:177] * [calico-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:00:06.754912    9560 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:00:06.759874    9560 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:00:06.754971    9560 notify.go:220] Checking for updates...
	I0408 11:00:06.765764    9560 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:00:06.768871    9560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:00:06.771888    9560 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:00:06.778813    9560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:00:06.782347    9560 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:00:06.782414    9560 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 11:00:06.782481    9560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:00:06.786831    9560 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 11:00:06.793923    9560 start.go:297] selected driver: qemu2
	I0408 11:00:06.793931    9560 start.go:901] validating driver "qemu2" against <nil>
	I0408 11:00:06.793939    9560 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:00:06.796401    9560 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:00:06.798925    9560 out.go:177] * Automatically selected the socket_vmnet network
	I0408 11:00:06.800198    9560 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:00:06.800248    9560 cni.go:84] Creating CNI manager for "calico"
	I0408 11:00:06.800254    9560 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0408 11:00:06.800309    9560 start.go:340] cluster config:
	{Name:calico-363000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:00:06.804932    9560 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:00:06.811934    9560 out.go:177] * Starting "calico-363000" primary control-plane node in "calico-363000" cluster
	I0408 11:00:06.815899    9560 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 11:00:06.815918    9560 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 11:00:06.815931    9560 cache.go:56] Caching tarball of preloaded images
	I0408 11:00:06.815995    9560 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 11:00:06.816008    9560 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 11:00:06.816079    9560 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/calico-363000/config.json ...
	I0408 11:00:06.816093    9560 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/calico-363000/config.json: {Name:mk66dec191e00e9cfd3470ec4d2b1b1d5642ed9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:00:06.816327    9560 start.go:360] acquireMachinesLock for calico-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:00:06.816361    9560 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "calico-363000"
	I0408 11:00:06.816372    9560 start.go:93] Provisioning new machine with config: &{Name:calico-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:00:06.816409    9560 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:00:06.824862    9560 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 11:00:06.841835    9560 start.go:159] libmachine.API.Create for "calico-363000" (driver="qemu2")
	I0408 11:00:06.841869    9560 client.go:168] LocalClient.Create starting
	I0408 11:00:06.841955    9560 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:00:06.841984    9560 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:06.841997    9560 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:06.842038    9560 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:00:06.842061    9560 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:06.842070    9560 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:06.842418    9560 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:00:06.994343    9560 main.go:141] libmachine: Creating SSH key...
	I0408 11:00:07.108866    9560 main.go:141] libmachine: Creating Disk image...
	I0408 11:00:07.108880    9560 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:00:07.109171    9560 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/disk.qcow2
	I0408 11:00:07.121593    9560 main.go:141] libmachine: STDOUT: 
	I0408 11:00:07.121615    9560 main.go:141] libmachine: STDERR: 
	I0408 11:00:07.121678    9560 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/disk.qcow2 +20000M
	I0408 11:00:07.133238    9560 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:00:07.133260    9560 main.go:141] libmachine: STDERR: 
	I0408 11:00:07.133274    9560 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/disk.qcow2
	I0408 11:00:07.133279    9560 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:00:07.133306    9560 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:92:c5:d3:38:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/disk.qcow2
	I0408 11:00:07.135160    9560 main.go:141] libmachine: STDOUT: 
	I0408 11:00:07.135176    9560 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:00:07.135197    9560 client.go:171] duration metric: took 293.318875ms to LocalClient.Create
	I0408 11:00:09.137410    9560 start.go:128] duration metric: took 2.320958667s to createHost
	I0408 11:00:09.137573    9560 start.go:83] releasing machines lock for "calico-363000", held for 2.321168541s
	W0408 11:00:09.137641    9560 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:00:09.148914    9560 out.go:177] * Deleting "calico-363000" in qemu2 ...
	W0408 11:00:09.182074    9560 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:00:09.182109    9560 start.go:728] Will try again in 5 seconds ...
	I0408 11:00:14.184424    9560 start.go:360] acquireMachinesLock for calico-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:00:14.184914    9560 start.go:364] duration metric: took 373.542µs to acquireMachinesLock for "calico-363000"
	I0408 11:00:14.185051    9560 start.go:93] Provisioning new machine with config: &{Name:calico-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:00:14.185313    9560 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:00:14.195058    9560 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 11:00:14.237424    9560 start.go:159] libmachine.API.Create for "calico-363000" (driver="qemu2")
	I0408 11:00:14.237485    9560 client.go:168] LocalClient.Create starting
	I0408 11:00:14.237604    9560 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:00:14.237669    9560 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:14.237682    9560 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:14.237742    9560 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:00:14.237784    9560 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:14.237795    9560 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:14.238364    9560 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:00:14.396665    9560 main.go:141] libmachine: Creating SSH key...
	I0408 11:00:14.511216    9560 main.go:141] libmachine: Creating Disk image...
	I0408 11:00:14.511223    9560 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:00:14.511473    9560 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/disk.qcow2
	I0408 11:00:14.524051    9560 main.go:141] libmachine: STDOUT: 
	I0408 11:00:14.524072    9560 main.go:141] libmachine: STDERR: 
	I0408 11:00:14.524123    9560 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/disk.qcow2 +20000M
	I0408 11:00:14.534924    9560 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:00:14.534943    9560 main.go:141] libmachine: STDERR: 
	I0408 11:00:14.534957    9560 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/disk.qcow2
	I0408 11:00:14.534963    9560 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:00:14.535007    9560 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:d8:e2:1e:0a:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/calico-363000/disk.qcow2
	I0408 11:00:14.536787    9560 main.go:141] libmachine: STDOUT: 
	I0408 11:00:14.536803    9560 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:00:14.536814    9560 client.go:171] duration metric: took 299.31975ms to LocalClient.Create
	I0408 11:00:16.539157    9560 start.go:128] duration metric: took 2.353664625s to createHost
	I0408 11:00:16.539250    9560 start.go:83] releasing machines lock for "calico-363000", held for 2.354291541s
	W0408 11:00:16.539591    9560 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:00:16.549324    9560 out.go:177] 
	W0408 11:00:16.556381    9560 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:00:16.556431    9560 out.go:239] * 
	* 
	W0408 11:00:16.559497    9560 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:00:16.569288    9560 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.89s)
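
The transcript above also shows minikube's recovery behavior: StartHost fails, the profile is deleted, minikube waits five seconds, retries exactly once, then exits with GUEST_PROVISION. While the daemon is down, the retry can never succeed. A simplified sketch of that one-retry control flow (illustrative only, not minikube's actual implementation; the always-failing startHost stub stands in for host creation):

    // retrysketch.go - the shape visible in the logs: fail, warn,
    // sleep 5s, try once more, then give up with a fatal error.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for host creation; while socket_vmnet is
    // unreachable it always fails the way the logs above do.
    func startHost(profile string) error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        const profile = "calico-363000"
        if err := startHost(profile); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
            if err := startHost(profile); err != nil {
                fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
                return
            }
        }
        fmt.Println("host started")
    }

Because the retry uses a fixed short delay and no precondition check, a persistently absent daemon turns every network-plugin test into the same ~10s failure seen throughout this group.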

TestNetworkPlugins/group/custom-flannel/Start (9.83s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.83138625s)

-- stdout --
	* [custom-flannel-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-363000" primary control-plane node in "custom-flannel-363000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-363000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 11:00:19.089894    9681 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:00:19.090025    9681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:00:19.090029    9681 out.go:304] Setting ErrFile to fd 2...
	I0408 11:00:19.090031    9681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:00:19.090157    9681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:00:19.091250    9681 out.go:298] Setting JSON to false
	I0408 11:00:19.107708    9681 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7189,"bootTime":1712592030,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:00:19.107772    9681 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:00:19.113504    9681 out.go:177] * [custom-flannel-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:00:19.121477    9681 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:00:19.125446    9681 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:00:19.121530    9681 notify.go:220] Checking for updates...
	I0408 11:00:19.128546    9681 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:00:19.131495    9681 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:00:19.134461    9681 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:00:19.137512    9681 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:00:19.140677    9681 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:00:19.140750    9681 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 11:00:19.140791    9681 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:00:19.145448    9681 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 11:00:19.152378    9681 start.go:297] selected driver: qemu2
	I0408 11:00:19.152385    9681 start.go:901] validating driver "qemu2" against <nil>
	I0408 11:00:19.152391    9681 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:00:19.154701    9681 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:00:19.157510    9681 out.go:177] * Automatically selected the socket_vmnet network
	I0408 11:00:19.160745    9681 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:00:19.160795    9681 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0408 11:00:19.160803    9681 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0408 11:00:19.160840    9681 start.go:340] cluster config:
	{Name:custom-flannel-363000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:00:19.165436    9681 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:00:19.172457    9681 out.go:177] * Starting "custom-flannel-363000" primary control-plane node in "custom-flannel-363000" cluster
	I0408 11:00:19.176462    9681 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 11:00:19.176480    9681 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 11:00:19.176488    9681 cache.go:56] Caching tarball of preloaded images
	I0408 11:00:19.176544    9681 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 11:00:19.176550    9681 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 11:00:19.176618    9681 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/custom-flannel-363000/config.json ...
	I0408 11:00:19.176631    9681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/custom-flannel-363000/config.json: {Name:mkcd56e5ac95d0646d71d3136bad3e853a72faef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:00:19.176843    9681 start.go:360] acquireMachinesLock for custom-flannel-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:00:19.176872    9681 start.go:364] duration metric: took 23.583µs to acquireMachinesLock for "custom-flannel-363000"
	I0408 11:00:19.176883    9681 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:00:19.176922    9681 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:00:19.185488    9681 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 11:00:19.202001    9681 start.go:159] libmachine.API.Create for "custom-flannel-363000" (driver="qemu2")
	I0408 11:00:19.202032    9681 client.go:168] LocalClient.Create starting
	I0408 11:00:19.202097    9681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:00:19.202129    9681 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:19.202139    9681 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:19.202177    9681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:00:19.202202    9681 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:19.202208    9681 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:19.202558    9681 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:00:19.352840    9681 main.go:141] libmachine: Creating SSH key...
	I0408 11:00:19.407010    9681 main.go:141] libmachine: Creating Disk image...
	I0408 11:00:19.407015    9681 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:00:19.407259    9681 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/disk.qcow2
	I0408 11:00:19.419750    9681 main.go:141] libmachine: STDOUT: 
	I0408 11:00:19.419787    9681 main.go:141] libmachine: STDERR: 
	I0408 11:00:19.419852    9681 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/disk.qcow2 +20000M
	I0408 11:00:19.430678    9681 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:00:19.430695    9681 main.go:141] libmachine: STDERR: 
	I0408 11:00:19.430707    9681 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/disk.qcow2
	I0408 11:00:19.430715    9681 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:00:19.430751    9681 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:8a:cc:79:98:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/disk.qcow2
	I0408 11:00:19.432505    9681 main.go:141] libmachine: STDOUT: 
	I0408 11:00:19.432530    9681 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:00:19.432549    9681 client.go:171] duration metric: took 230.510542ms to LocalClient.Create
	I0408 11:00:21.434782    9681 start.go:128] duration metric: took 2.257811334s to createHost
	I0408 11:00:21.434862    9681 start.go:83] releasing machines lock for "custom-flannel-363000", held for 2.257966834s
	W0408 11:00:21.434962    9681 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:00:21.446411    9681 out.go:177] * Deleting "custom-flannel-363000" in qemu2 ...
	W0408 11:00:21.479597    9681 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:00:21.479655    9681 start.go:728] Will try again in 5 seconds ...
	I0408 11:00:26.481916    9681 start.go:360] acquireMachinesLock for custom-flannel-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:00:26.482469    9681 start.go:364] duration metric: took 448.458µs to acquireMachinesLock for "custom-flannel-363000"
	I0408 11:00:26.482593    9681 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:00:26.482965    9681 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:00:26.492567    9681 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 11:00:26.540106    9681 start.go:159] libmachine.API.Create for "custom-flannel-363000" (driver="qemu2")
	I0408 11:00:26.540157    9681 client.go:168] LocalClient.Create starting
	I0408 11:00:26.540273    9681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:00:26.540331    9681 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:26.540349    9681 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:26.540400    9681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:00:26.540441    9681 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:26.540452    9681 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:26.541011    9681 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:00:26.700884    9681 main.go:141] libmachine: Creating SSH key...
	I0408 11:00:26.818941    9681 main.go:141] libmachine: Creating Disk image...
	I0408 11:00:26.818948    9681 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:00:26.819204    9681 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/disk.qcow2
	I0408 11:00:26.831914    9681 main.go:141] libmachine: STDOUT: 
	I0408 11:00:26.831936    9681 main.go:141] libmachine: STDERR: 
	I0408 11:00:26.831991    9681 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/disk.qcow2 +20000M
	I0408 11:00:26.842852    9681 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:00:26.842872    9681 main.go:141] libmachine: STDERR: 
	I0408 11:00:26.842884    9681 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/disk.qcow2
	I0408 11:00:26.842889    9681 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:00:26.842932    9681 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:fc:44:6f:e4:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/custom-flannel-363000/disk.qcow2
	I0408 11:00:26.844756    9681 main.go:141] libmachine: STDOUT: 
	I0408 11:00:26.844785    9681 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:00:26.844798    9681 client.go:171] duration metric: took 304.632167ms to LocalClient.Create
	I0408 11:00:28.847011    9681 start.go:128] duration metric: took 2.363992s to createHost
	I0408 11:00:28.847087    9681 start.go:83] releasing machines lock for "custom-flannel-363000", held for 2.364574708s
	W0408 11:00:28.847423    9681 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:00:28.857111    9681 out.go:177] 
	W0408 11:00:28.864333    9681 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:00:28.864410    9681 out.go:239] * 
	* 
	W0408 11:00:28.866989    9681 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:00:28.876150    9681 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.83s)
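Note: each failed start in this group dies at the same step, before the CNI variant under test is ever exercised: socket_vmnet_client gets "Connection refused" on the daemon socket at /var/run/socket_vmnet, so the qemu2 VM is never launched. A minimal triage sketch for the affected host, assuming the /opt/socket_vmnet install layout shown in these logs (the gateway address below is an illustrative assumption, not a value taken from this report):

	# Check whether the socket_vmnet daemon has created its listening socket.
	ls -l /var/run/socket_vmnet

	# If the socket is missing, start the daemon (root required) and re-run the suite.
	# 192.168.105.1 is an assumed example gateway; substitute the host's configured value.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet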

TestNetworkPlugins/group/false/Start (9.86s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.855146208s)

-- stdout --
	* [false-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-363000" primary control-plane node in "false-363000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-363000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 11:00:31.384908    9799 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:00:31.385034    9799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:00:31.385037    9799 out.go:304] Setting ErrFile to fd 2...
	I0408 11:00:31.385040    9799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:00:31.385171    9799 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:00:31.386246    9799 out.go:298] Setting JSON to false
	I0408 11:00:31.402648    9799 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7201,"bootTime":1712592030,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:00:31.402715    9799 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:00:31.409916    9799 out.go:177] * [false-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:00:31.422022    9799 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:00:31.418073    9799 notify.go:220] Checking for updates...
	I0408 11:00:31.428025    9799 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:00:31.430917    9799 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:00:31.433992    9799 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:00:31.437034    9799 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:00:31.438394    9799 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:00:31.441378    9799 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:00:31.441457    9799 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 11:00:31.441506    9799 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:00:31.446041    9799 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 11:00:31.451015    9799 start.go:297] selected driver: qemu2
	I0408 11:00:31.451024    9799 start.go:901] validating driver "qemu2" against <nil>
	I0408 11:00:31.451031    9799 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:00:31.453438    9799 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:00:31.456991    9799 out.go:177] * Automatically selected the socket_vmnet network
	I0408 11:00:31.460096    9799 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:00:31.460129    9799 cni.go:84] Creating CNI manager for "false"
	I0408 11:00:31.460165    9799 start.go:340] cluster config:
	{Name:false-363000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:00:31.464851    9799 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:00:31.472028    9799 out.go:177] * Starting "false-363000" primary control-plane node in "false-363000" cluster
	I0408 11:00:31.475873    9799 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 11:00:31.475887    9799 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 11:00:31.475894    9799 cache.go:56] Caching tarball of preloaded images
	I0408 11:00:31.475954    9799 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 11:00:31.475960    9799 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 11:00:31.476022    9799 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/false-363000/config.json ...
	I0408 11:00:31.476035    9799 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/false-363000/config.json: {Name:mk09438225516a7897c9b5253cb7bbee29270f84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:00:31.476253    9799 start.go:360] acquireMachinesLock for false-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:00:31.476285    9799 start.go:364] duration metric: took 25.875µs to acquireMachinesLock for "false-363000"
	I0408 11:00:31.476296    9799 start.go:93] Provisioning new machine with config: &{Name:false-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:00:31.476324    9799 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:00:31.483953    9799 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 11:00:31.500713    9799 start.go:159] libmachine.API.Create for "false-363000" (driver="qemu2")
	I0408 11:00:31.500740    9799 client.go:168] LocalClient.Create starting
	I0408 11:00:31.500804    9799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:00:31.500833    9799 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:31.500842    9799 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:31.500884    9799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:00:31.500906    9799 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:31.500915    9799 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:31.501259    9799 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:00:31.652799    9799 main.go:141] libmachine: Creating SSH key...
	I0408 11:00:31.714813    9799 main.go:141] libmachine: Creating Disk image...
	I0408 11:00:31.714821    9799 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:00:31.715117    9799 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/disk.qcow2
	I0408 11:00:31.728716    9799 main.go:141] libmachine: STDOUT: 
	I0408 11:00:31.728741    9799 main.go:141] libmachine: STDERR: 
	I0408 11:00:31.728807    9799 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/disk.qcow2 +20000M
	I0408 11:00:31.740856    9799 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:00:31.740878    9799 main.go:141] libmachine: STDERR: 
	I0408 11:00:31.740899    9799 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/disk.qcow2
	I0408 11:00:31.740913    9799 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:00:31.740954    9799 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:89:81:97:e1:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/disk.qcow2
	I0408 11:00:31.743064    9799 main.go:141] libmachine: STDOUT: 
	I0408 11:00:31.743085    9799 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:00:31.743106    9799 client.go:171] duration metric: took 242.356042ms to LocalClient.Create
	I0408 11:00:33.745455    9799 start.go:128] duration metric: took 2.269051584s to createHost
	I0408 11:00:33.745570    9799 start.go:83] releasing machines lock for "false-363000", held for 2.269262083s
	W0408 11:00:33.745626    9799 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:00:33.755630    9799 out.go:177] * Deleting "false-363000" in qemu2 ...
	W0408 11:00:33.781127    9799 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:00:33.781159    9799 start.go:728] Will try again in 5 seconds ...
	I0408 11:00:38.783360    9799 start.go:360] acquireMachinesLock for false-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:00:38.783852    9799 start.go:364] duration metric: took 398.958µs to acquireMachinesLock for "false-363000"
	I0408 11:00:38.783966    9799 start.go:93] Provisioning new machine with config: &{Name:false-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:00:38.784273    9799 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:00:38.789908    9799 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 11:00:38.833490    9799 start.go:159] libmachine.API.Create for "false-363000" (driver="qemu2")
	I0408 11:00:38.833557    9799 client.go:168] LocalClient.Create starting
	I0408 11:00:38.833680    9799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:00:38.833741    9799 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:38.833760    9799 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:38.833822    9799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:00:38.833863    9799 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:38.833875    9799 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:38.834419    9799 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:00:38.994071    9799 main.go:141] libmachine: Creating SSH key...
	I0408 11:00:39.145185    9799 main.go:141] libmachine: Creating Disk image...
	I0408 11:00:39.145192    9799 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:00:39.145455    9799 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/disk.qcow2
	I0408 11:00:39.158075    9799 main.go:141] libmachine: STDOUT: 
	I0408 11:00:39.158100    9799 main.go:141] libmachine: STDERR: 
	I0408 11:00:39.158153    9799 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/disk.qcow2 +20000M
	I0408 11:00:39.169310    9799 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:00:39.169335    9799 main.go:141] libmachine: STDERR: 
	I0408 11:00:39.169344    9799 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/disk.qcow2
	I0408 11:00:39.169351    9799 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:00:39.169390    9799 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:f4:83:84:ec:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/false-363000/disk.qcow2
	I0408 11:00:39.171137    9799 main.go:141] libmachine: STDOUT: 
	I0408 11:00:39.171155    9799 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:00:39.171168    9799 client.go:171] duration metric: took 337.60275ms to LocalClient.Create
	I0408 11:00:41.173399    9799 start.go:128] duration metric: took 2.389063708s to createHost
	I0408 11:00:41.173481    9799 start.go:83] releasing machines lock for "false-363000", held for 2.389588334s
	W0408 11:00:41.173880    9799 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:00:41.182439    9799 out.go:177] 
	W0408 11:00:41.190674    9799 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:00:41.190735    9799 out.go:239] * 
	* 
	W0408 11:00:41.193325    9799 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:00:41.198578    9799 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.86s)

TestNetworkPlugins/group/enable-default-cni/Start (9.81s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.812190083s)

-- stdout --
	* [enable-default-cni-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-363000" primary control-plane node in "enable-default-cni-363000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-363000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 11:00:43.446319    9913 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:00:43.446466    9913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:00:43.446469    9913 out.go:304] Setting ErrFile to fd 2...
	I0408 11:00:43.446471    9913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:00:43.446606    9913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:00:43.447693    9913 out.go:298] Setting JSON to false
	I0408 11:00:43.464006    9913 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7213,"bootTime":1712592030,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:00:43.464081    9913 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:00:43.469015    9913 out.go:177] * [enable-default-cni-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:00:43.477888    9913 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:00:43.482729    9913 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:00:43.477980    9913 notify.go:220] Checking for updates...
	I0408 11:00:43.488866    9913 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:00:43.491891    9913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:00:43.494916    9913 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:00:43.497852    9913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:00:43.501270    9913 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:00:43.501335    9913 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 11:00:43.501386    9913 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:00:43.505788    9913 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 11:00:43.512877    9913 start.go:297] selected driver: qemu2
	I0408 11:00:43.512883    9913 start.go:901] validating driver "qemu2" against <nil>
	I0408 11:00:43.512890    9913 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:00:43.515239    9913 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:00:43.518752    9913 out.go:177] * Automatically selected the socket_vmnet network
	E0408 11:00:43.521926    9913 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0408 11:00:43.521950    9913 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:00:43.521993    9913 cni.go:84] Creating CNI manager for "bridge"
	I0408 11:00:43.521998    9913 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 11:00:43.522048    9913 start.go:340] cluster config:
	{Name:enable-default-cni-363000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:00:43.526709    9913 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:00:43.531802    9913 out.go:177] * Starting "enable-default-cni-363000" primary control-plane node in "enable-default-cni-363000" cluster
	I0408 11:00:43.535909    9913 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 11:00:43.535922    9913 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 11:00:43.535930    9913 cache.go:56] Caching tarball of preloaded images
	I0408 11:00:43.535978    9913 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 11:00:43.535983    9913 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 11:00:43.536037    9913 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/enable-default-cni-363000/config.json ...
	I0408 11:00:43.536049    9913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/enable-default-cni-363000/config.json: {Name:mk0020006166c9968584702533183943f1d87f74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:00:43.536253    9913 start.go:360] acquireMachinesLock for enable-default-cni-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:00:43.536285    9913 start.go:364] duration metric: took 23.625µs to acquireMachinesLock for "enable-default-cni-363000"
	I0408 11:00:43.536296    9913 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:00:43.536324    9913 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:00:43.544846    9913 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 11:00:43.561197    9913 start.go:159] libmachine.API.Create for "enable-default-cni-363000" (driver="qemu2")
	I0408 11:00:43.561224    9913 client.go:168] LocalClient.Create starting
	I0408 11:00:43.561281    9913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:00:43.561312    9913 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:43.561323    9913 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:43.561363    9913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:00:43.561384    9913 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:43.561390    9913 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:43.561758    9913 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:00:43.714261    9913 main.go:141] libmachine: Creating SSH key...
	I0408 11:00:43.830856    9913 main.go:141] libmachine: Creating Disk image...
	I0408 11:00:43.830869    9913 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:00:43.831121    9913 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/disk.qcow2
	I0408 11:00:43.843916    9913 main.go:141] libmachine: STDOUT: 
	I0408 11:00:43.843945    9913 main.go:141] libmachine: STDERR: 
	I0408 11:00:43.844003    9913 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/disk.qcow2 +20000M
	I0408 11:00:43.854858    9913 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:00:43.854880    9913 main.go:141] libmachine: STDERR: 
	I0408 11:00:43.854897    9913 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/disk.qcow2
	I0408 11:00:43.854904    9913 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:00:43.854938    9913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:25:81:66:c4:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/disk.qcow2
	I0408 11:00:43.856795    9913 main.go:141] libmachine: STDOUT: 
	I0408 11:00:43.856817    9913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:00:43.856840    9913 client.go:171] duration metric: took 295.609041ms to LocalClient.Create
	I0408 11:00:45.858951    9913 start.go:128] duration metric: took 2.322600625s to createHost
	I0408 11:00:45.858986    9913 start.go:83] releasing machines lock for "enable-default-cni-363000", held for 2.322682291s
	W0408 11:00:45.859024    9913 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:00:45.867813    9913 out.go:177] * Deleting "enable-default-cni-363000" in qemu2 ...
	W0408 11:00:45.881941    9913 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:00:45.881966    9913 start.go:728] Will try again in 5 seconds ...
	I0408 11:00:50.884185    9913 start.go:360] acquireMachinesLock for enable-default-cni-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:00:50.884676    9913 start.go:364] duration metric: took 410.792µs to acquireMachinesLock for "enable-default-cni-363000"
	I0408 11:00:50.884733    9913 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:00:50.885044    9913 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:00:50.893572    9913 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 11:00:50.937965    9913 start.go:159] libmachine.API.Create for "enable-default-cni-363000" (driver="qemu2")
	I0408 11:00:50.938016    9913 client.go:168] LocalClient.Create starting
	I0408 11:00:50.938132    9913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:00:50.938196    9913 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:50.938216    9913 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:50.938287    9913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:00:50.938330    9913 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:50.938342    9913 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:50.938875    9913 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:00:51.099625    9913 main.go:141] libmachine: Creating SSH key...
	I0408 11:00:51.159413    9913 main.go:141] libmachine: Creating Disk image...
	I0408 11:00:51.159419    9913 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:00:51.159668    9913 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/disk.qcow2
	I0408 11:00:51.172409    9913 main.go:141] libmachine: STDOUT: 
	I0408 11:00:51.172427    9913 main.go:141] libmachine: STDERR: 
	I0408 11:00:51.172480    9913 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/disk.qcow2 +20000M
	I0408 11:00:51.183297    9913 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:00:51.183313    9913 main.go:141] libmachine: STDERR: 
	I0408 11:00:51.183324    9913 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/disk.qcow2
	I0408 11:00:51.183331    9913 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:00:51.183362    9913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:e6:e8:c3:a1:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/enable-default-cni-363000/disk.qcow2
	I0408 11:00:51.185117    9913 main.go:141] libmachine: STDOUT: 
	I0408 11:00:51.185131    9913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:00:51.185143    9913 client.go:171] duration metric: took 247.1215ms to LocalClient.Create
	I0408 11:00:53.187320    9913 start.go:128] duration metric: took 2.3022335s to createHost
	I0408 11:00:53.187390    9913 start.go:83] releasing machines lock for "enable-default-cni-363000", held for 2.302680083s
	W0408 11:00:53.187716    9913 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:00:53.198219    9913 out.go:177] 
	W0408 11:00:53.204331    9913 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:00:53.204368    9913 out.go:239] * 
	* 
	W0408 11:00:53.211344    9913 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:00:53.219193    9913 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.81s)
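Every failure in this group dies at the same point: libmachine wraps the qemu-system-aarch64 launch in /opt/socket_vmnet/bin/socket_vmnet_client, and the client exits with `Failed to connect to "/var/run/socket_vmnet": Connection refused` before QEMU ever runs. In other words, the socket_vmnet daemon that should be listening on /var/run/socket_vmnet is not up on the build agent. A minimal check outside the test harness, assuming the `<socket> <command>` calling convention visible in the log above (`true` is a hypothetical stand-in for the qemu command line):

	# Is the daemon's unix socket present, and is a socket_vmnet process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Reproduce the failure without QEMU: the client connects to the socket
	# and runs the given command (the qemu invocations above receive the
	# socket as fd=3). "Connection refused" here confirms the daemon is down.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true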

TestNetworkPlugins/group/flannel/Start (9.8s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.802441s)

-- stdout --
	* [flannel-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-363000" primary control-plane node in "flannel-363000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-363000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 11:00:55.450292   10030 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:00:55.450440   10030 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:00:55.450443   10030 out.go:304] Setting ErrFile to fd 2...
	I0408 11:00:55.450446   10030 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:00:55.450596   10030 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:00:55.451965   10030 out.go:298] Setting JSON to false
	I0408 11:00:55.470529   10030 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7225,"bootTime":1712592030,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:00:55.470614   10030 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:00:55.475833   10030 out.go:177] * [flannel-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:00:55.483800   10030 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:00:55.483857   10030 notify.go:220] Checking for updates...
	I0408 11:00:55.490775   10030 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:00:55.493808   10030 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:00:55.496760   10030 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:00:55.499785   10030 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:00:55.502778   10030 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:00:55.506191   10030 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:00:55.506260   10030 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 11:00:55.506300   10030 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:00:55.510738   10030 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 11:00:55.517817   10030 start.go:297] selected driver: qemu2
	I0408 11:00:55.517827   10030 start.go:901] validating driver "qemu2" against <nil>
	I0408 11:00:55.517835   10030 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:00:55.520365   10030 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:00:55.523788   10030 out.go:177] * Automatically selected the socket_vmnet network
	I0408 11:00:55.526829   10030 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:00:55.526881   10030 cni.go:84] Creating CNI manager for "flannel"
	I0408 11:00:55.526886   10030 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0408 11:00:55.526927   10030 start.go:340] cluster config:
	{Name:flannel-363000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:00:55.531820   10030 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:00:55.538783   10030 out.go:177] * Starting "flannel-363000" primary control-plane node in "flannel-363000" cluster
	I0408 11:00:55.542767   10030 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 11:00:55.542796   10030 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 11:00:55.542807   10030 cache.go:56] Caching tarball of preloaded images
	I0408 11:00:55.542885   10030 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 11:00:55.542892   10030 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 11:00:55.542955   10030 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/flannel-363000/config.json ...
	I0408 11:00:55.542968   10030 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/flannel-363000/config.json: {Name:mkef1ad0565059808a4af8bfeaa41402aa264d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:00:55.543278   10030 start.go:360] acquireMachinesLock for flannel-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:00:55.543307   10030 start.go:364] duration metric: took 24.208µs to acquireMachinesLock for "flannel-363000"
	I0408 11:00:55.543317   10030 start.go:93] Provisioning new machine with config: &{Name:flannel-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:00:55.543349   10030 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:00:55.551765   10030 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 11:00:55.566928   10030 start.go:159] libmachine.API.Create for "flannel-363000" (driver="qemu2")
	I0408 11:00:55.566960   10030 client.go:168] LocalClient.Create starting
	I0408 11:00:55.567047   10030 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:00:55.567077   10030 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:55.567085   10030 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:55.567122   10030 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:00:55.567144   10030 main.go:141] libmachine: Decoding PEM data...
	I0408 11:00:55.567151   10030 main.go:141] libmachine: Parsing certificate...
	I0408 11:00:55.567610   10030 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:00:55.717309   10030 main.go:141] libmachine: Creating SSH key...
	I0408 11:00:55.782434   10030 main.go:141] libmachine: Creating Disk image...
	I0408 11:00:55.782441   10030 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:00:55.782687   10030 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/disk.qcow2
	I0408 11:00:55.795282   10030 main.go:141] libmachine: STDOUT: 
	I0408 11:00:55.795304   10030 main.go:141] libmachine: STDERR: 
	I0408 11:00:55.795361   10030 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/disk.qcow2 +20000M
	I0408 11:00:55.806336   10030 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:00:55.806363   10030 main.go:141] libmachine: STDERR: 
	I0408 11:00:55.806383   10030 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/disk.qcow2
	I0408 11:00:55.806387   10030 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:00:55.806415   10030 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:c1:93:76:9c:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/disk.qcow2
	I0408 11:00:55.808311   10030 main.go:141] libmachine: STDOUT: 
	I0408 11:00:55.808326   10030 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:00:55.808344   10030 client.go:171] duration metric: took 241.377375ms to LocalClient.Create
	I0408 11:00:57.810574   10030 start.go:128] duration metric: took 2.267181583s to createHost
	I0408 11:00:57.810687   10030 start.go:83] releasing machines lock for "flannel-363000", held for 2.267356083s
	W0408 11:00:57.810751   10030 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:00:57.821028   10030 out.go:177] * Deleting "flannel-363000" in qemu2 ...
	W0408 11:00:57.849837   10030 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:00:57.849866   10030 start.go:728] Will try again in 5 seconds ...
	I0408 11:01:02.852140   10030 start.go:360] acquireMachinesLock for flannel-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:01:02.852711   10030 start.go:364] duration metric: took 476.667µs to acquireMachinesLock for "flannel-363000"
	I0408 11:01:02.852861   10030 start.go:93] Provisioning new machine with config: &{Name:flannel-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:01:02.853337   10030 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:01:02.858159   10030 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 11:01:02.908658   10030 start.go:159] libmachine.API.Create for "flannel-363000" (driver="qemu2")
	I0408 11:01:02.908712   10030 client.go:168] LocalClient.Create starting
	I0408 11:01:02.908859   10030 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:01:02.908920   10030 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:02.908938   10030 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:02.909008   10030 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:01:02.909051   10030 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:02.909063   10030 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:02.909652   10030 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:01:03.072883   10030 main.go:141] libmachine: Creating SSH key...
	I0408 11:01:03.150658   10030 main.go:141] libmachine: Creating Disk image...
	I0408 11:01:03.150666   10030 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:01:03.150960   10030 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/disk.qcow2
	I0408 11:01:03.164915   10030 main.go:141] libmachine: STDOUT: 
	I0408 11:01:03.164948   10030 main.go:141] libmachine: STDERR: 
	I0408 11:01:03.165016   10030 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/disk.qcow2 +20000M
	I0408 11:01:03.177632   10030 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:01:03.177660   10030 main.go:141] libmachine: STDERR: 
	I0408 11:01:03.177679   10030 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/disk.qcow2
	I0408 11:01:03.177697   10030 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:01:03.177736   10030 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:d2:c6:52:e0:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/flannel-363000/disk.qcow2
	I0408 11:01:03.179941   10030 main.go:141] libmachine: STDOUT: 
	I0408 11:01:03.179960   10030 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:01:03.179975   10030 client.go:171] duration metric: took 271.25475ms to LocalClient.Create
	I0408 11:01:05.182322   10030 start.go:128] duration metric: took 2.328915041s to createHost
	I0408 11:01:05.182461   10030 start.go:83] releasing machines lock for "flannel-363000", held for 2.329706334s
	W0408 11:01:05.182855   10030 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:01:05.193587   10030 out.go:177] 
	W0408 11:01:05.196672   10030 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:01:05.196702   10030 out.go:239] * 
	* 
	W0408 11:01:05.199738   10030 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:01:05.209586   10030 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.80s)
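The ~9.8s per test is entirely the retry loop visible above: both attempts create the disk successfully (qemu-img convert, then resize +20000M), fail at the socket_vmnet_client step, and StartHost waits 5 seconds (start.go:728) before a second, equally doomed attempt, after which the run exits with GUEST_PROVISION (exit status 80). A pre-flight guard in the CI job would fail the run fast instead; a sketch, under the assumption that the daemon shows up as a process named socket_vmnet:

	# Hypothetical pre-flight for the Jenkins job: abort before the suite
	# spends two doomed VM launches on every start-path test.
	if ! pgrep -x socket_vmnet >/dev/null; then
	  echo "socket_vmnet daemon is not running on this agent" >&2
	  exit 1
	fi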

TestNetworkPlugins/group/bridge/Start (9.78s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.780169292s)

-- stdout --
	* [bridge-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-363000" primary control-plane node in "bridge-363000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-363000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 11:01:07.623454   10150 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:01:07.623569   10150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:01:07.623572   10150 out.go:304] Setting ErrFile to fd 2...
	I0408 11:01:07.623575   10150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:01:07.623695   10150 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:01:07.624856   10150 out.go:298] Setting JSON to false
	I0408 11:01:07.641540   10150 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7237,"bootTime":1712592030,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:01:07.641620   10150 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:01:07.646357   10150 out.go:177] * [bridge-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:01:07.655387   10150 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:01:07.655441   10150 notify.go:220] Checking for updates...
	I0408 11:01:07.663327   10150 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:01:07.666373   10150 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:01:07.669368   10150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:01:07.672406   10150 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:01:07.675385   10150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:01:07.678639   10150 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:01:07.678701   10150 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 11:01:07.678750   10150 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:01:07.683348   10150 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 11:01:07.690396   10150 start.go:297] selected driver: qemu2
	I0408 11:01:07.690404   10150 start.go:901] validating driver "qemu2" against <nil>
	I0408 11:01:07.690412   10150 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:01:07.692610   10150 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:01:07.696431   10150 out.go:177] * Automatically selected the socket_vmnet network
	I0408 11:01:07.699435   10150 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:01:07.699467   10150 cni.go:84] Creating CNI manager for "bridge"
	I0408 11:01:07.699477   10150 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 11:01:07.699502   10150 start.go:340] cluster config:
	{Name:bridge-363000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:01:07.703939   10150 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:01:07.711353   10150 out.go:177] * Starting "bridge-363000" primary control-plane node in "bridge-363000" cluster
	I0408 11:01:07.714308   10150 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 11:01:07.714321   10150 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 11:01:07.714327   10150 cache.go:56] Caching tarball of preloaded images
	I0408 11:01:07.714376   10150 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 11:01:07.714381   10150 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 11:01:07.714428   10150 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/bridge-363000/config.json ...
	I0408 11:01:07.714440   10150 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/bridge-363000/config.json: {Name:mk06fbbdb48d80662a8398d0d3edaddd59e8e384 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:01:07.714640   10150 start.go:360] acquireMachinesLock for bridge-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:01:07.714669   10150 start.go:364] duration metric: took 23.333µs to acquireMachinesLock for "bridge-363000"
	I0408 11:01:07.714680   10150 start.go:93] Provisioning new machine with config: &{Name:bridge-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:01:07.714706   10150 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:01:07.722298   10150 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 11:01:07.737478   10150 start.go:159] libmachine.API.Create for "bridge-363000" (driver="qemu2")
	I0408 11:01:07.737503   10150 client.go:168] LocalClient.Create starting
	I0408 11:01:07.737566   10150 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:01:07.737598   10150 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:07.737607   10150 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:07.737642   10150 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:01:07.737664   10150 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:07.737672   10150 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:07.737977   10150 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:01:07.889005   10150 main.go:141] libmachine: Creating SSH key...
	I0408 11:01:07.960128   10150 main.go:141] libmachine: Creating Disk image...
	I0408 11:01:07.960134   10150 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:01:07.960384   10150 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/disk.qcow2
	I0408 11:01:07.973094   10150 main.go:141] libmachine: STDOUT: 
	I0408 11:01:07.973121   10150 main.go:141] libmachine: STDERR: 
	I0408 11:01:07.973177   10150 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/disk.qcow2 +20000M
	I0408 11:01:07.983905   10150 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:01:07.983932   10150 main.go:141] libmachine: STDERR: 
	I0408 11:01:07.983954   10150 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/disk.qcow2
	I0408 11:01:07.983959   10150 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:01:07.983993   10150 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:65:81:1c:5e:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/disk.qcow2
	I0408 11:01:07.985856   10150 main.go:141] libmachine: STDOUT: 
	I0408 11:01:07.985882   10150 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:01:07.985903   10150 client.go:171] duration metric: took 248.393417ms to LocalClient.Create
	I0408 11:01:09.988162   10150 start.go:128] duration metric: took 2.273411375s to createHost
	I0408 11:01:09.988259   10150 start.go:83] releasing machines lock for "bridge-363000", held for 2.273566209s
	W0408 11:01:09.988315   10150 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:01:09.999379   10150 out.go:177] * Deleting "bridge-363000" in qemu2 ...
	W0408 11:01:10.031160   10150 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:01:10.031188   10150 start.go:728] Will try again in 5 seconds ...
	I0408 11:01:15.033547   10150 start.go:360] acquireMachinesLock for bridge-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:01:15.034061   10150 start.go:364] duration metric: took 403.041µs to acquireMachinesLock for "bridge-363000"
	I0408 11:01:15.034190   10150 start.go:93] Provisioning new machine with config: &{Name:bridge-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:01:15.034540   10150 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:01:15.043443   10150 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 11:01:15.091793   10150 start.go:159] libmachine.API.Create for "bridge-363000" (driver="qemu2")
	I0408 11:01:15.091850   10150 client.go:168] LocalClient.Create starting
	I0408 11:01:15.091975   10150 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:01:15.092039   10150 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:15.092057   10150 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:15.092121   10150 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:01:15.092164   10150 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:15.092175   10150 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:15.092704   10150 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:01:15.251329   10150 main.go:141] libmachine: Creating SSH key...
	I0408 11:01:15.300212   10150 main.go:141] libmachine: Creating Disk image...
	I0408 11:01:15.300218   10150 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:01:15.300449   10150 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/disk.qcow2
	I0408 11:01:15.312874   10150 main.go:141] libmachine: STDOUT: 
	I0408 11:01:15.312905   10150 main.go:141] libmachine: STDERR: 
	I0408 11:01:15.312959   10150 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/disk.qcow2 +20000M
	I0408 11:01:15.323851   10150 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:01:15.323871   10150 main.go:141] libmachine: STDERR: 
	I0408 11:01:15.323887   10150 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/disk.qcow2
	I0408 11:01:15.323891   10150 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:01:15.323918   10150 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:98:5b:69:fd:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/bridge-363000/disk.qcow2
	I0408 11:01:15.325726   10150 main.go:141] libmachine: STDOUT: 
	I0408 11:01:15.325750   10150 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:01:15.325763   10150 client.go:171] duration metric: took 233.906708ms to LocalClient.Create
	I0408 11:01:17.326872   10150 start.go:128] duration metric: took 2.292281208s to createHost
	I0408 11:01:17.326920   10150 start.go:83] releasing machines lock for "bridge-363000", held for 2.292821875s
	W0408 11:01:17.327071   10150 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:01:17.345340   10150 out.go:177] 
	W0408 11:01:17.349372   10150 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:01:17.349385   10150 out.go:239] * 
	* 
	W0408 11:01:17.350634   10150 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:01:17.362357   10150 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.78s)
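
[Editor's note] Every "/Start" failure in this run shares one root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and minikube aborts with GUEST_PROVISION. A minimal sketch for checking the daemon on the affected host (ls/pgrep are generic; the launchctl query assumes socket_vmnet was installed as a launchd service):

	# Does the socket exist, and is anything listening on it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If installed as a launchd service (service label varies by install method):
	sudo launchctl list | grep -i socket_vmnet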

TestNetworkPlugins/group/kubenet/Start (9.92s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-363000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.915517375s)

-- stdout --
	* [kubenet-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-363000" primary control-plane node in "kubenet-363000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-363000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 11:01:19.676437   10268 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:01:19.676559   10268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:01:19.676563   10268 out.go:304] Setting ErrFile to fd 2...
	I0408 11:01:19.676565   10268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:01:19.676706   10268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:01:19.677878   10268 out.go:298] Setting JSON to false
	I0408 11:01:19.694499   10268 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7249,"bootTime":1712592030,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:01:19.694561   10268 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:01:19.702076   10268 out.go:177] * [kubenet-363000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:01:19.714070   10268 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:01:19.709141   10268 notify.go:220] Checking for updates...
	I0408 11:01:19.721048   10268 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:01:19.724075   10268 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:01:19.727062   10268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:01:19.730097   10268 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:01:19.733050   10268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:01:19.736484   10268 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:01:19.736569   10268 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 11:01:19.736613   10268 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:01:19.741056   10268 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 11:01:19.748109   10268 start.go:297] selected driver: qemu2
	I0408 11:01:19.748119   10268 start.go:901] validating driver "qemu2" against <nil>
	I0408 11:01:19.748128   10268 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:01:19.750558   10268 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:01:19.754044   10268 out.go:177] * Automatically selected the socket_vmnet network
	I0408 11:01:19.757054   10268 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:01:19.757089   10268 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0408 11:01:19.757130   10268 start.go:340] cluster config:
	{Name:kubenet-363000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:01:19.761891   10268 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:01:19.769067   10268 out.go:177] * Starting "kubenet-363000" primary control-plane node in "kubenet-363000" cluster
	I0408 11:01:19.773088   10268 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 11:01:19.773104   10268 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 11:01:19.773114   10268 cache.go:56] Caching tarball of preloaded images
	I0408 11:01:19.773171   10268 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 11:01:19.773177   10268 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 11:01:19.773246   10268 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/kubenet-363000/config.json ...
	I0408 11:01:19.773260   10268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/kubenet-363000/config.json: {Name:mk25f9fcd197c2763b04f5b03652deec4794e938 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:01:19.773468   10268 start.go:360] acquireMachinesLock for kubenet-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:01:19.773499   10268 start.go:364] duration metric: took 24.542µs to acquireMachinesLock for "kubenet-363000"
	I0408 11:01:19.773509   10268 start.go:93] Provisioning new machine with config: &{Name:kubenet-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:01:19.773533   10268 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:01:19.781093   10268 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 11:01:19.797115   10268 start.go:159] libmachine.API.Create for "kubenet-363000" (driver="qemu2")
	I0408 11:01:19.797152   10268 client.go:168] LocalClient.Create starting
	I0408 11:01:19.797203   10268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:01:19.797231   10268 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:19.797238   10268 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:19.797280   10268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:01:19.797301   10268 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:19.797311   10268 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:19.797623   10268 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:01:19.947851   10268 main.go:141] libmachine: Creating SSH key...
	I0408 11:01:20.001939   10268 main.go:141] libmachine: Creating Disk image...
	I0408 11:01:20.001945   10268 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:01:20.002170   10268 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/disk.qcow2
	I0408 11:01:20.014577   10268 main.go:141] libmachine: STDOUT: 
	I0408 11:01:20.014599   10268 main.go:141] libmachine: STDERR: 
	I0408 11:01:20.014651   10268 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/disk.qcow2 +20000M
	I0408 11:01:20.025293   10268 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:01:20.025317   10268 main.go:141] libmachine: STDERR: 
	I0408 11:01:20.025336   10268 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/disk.qcow2
	I0408 11:01:20.025340   10268 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:01:20.025367   10268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:2f:e2:9c:87:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/disk.qcow2
	I0408 11:01:20.027151   10268 main.go:141] libmachine: STDOUT: 
	I0408 11:01:20.027166   10268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:01:20.027185   10268 client.go:171] duration metric: took 230.027292ms to LocalClient.Create
	I0408 11:01:22.029393   10268 start.go:128] duration metric: took 2.25581925s to createHost
	I0408 11:01:22.029460   10268 start.go:83] releasing machines lock for "kubenet-363000", held for 2.255937375s
	W0408 11:01:22.029547   10268 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:01:22.047065   10268 out.go:177] * Deleting "kubenet-363000" in qemu2 ...
	W0408 11:01:22.076828   10268 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:01:22.076856   10268 start.go:728] Will try again in 5 seconds ...
	I0408 11:01:27.078434   10268 start.go:360] acquireMachinesLock for kubenet-363000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:01:27.078592   10268 start.go:364] duration metric: took 116.208µs to acquireMachinesLock for "kubenet-363000"
	I0408 11:01:27.078608   10268 start.go:93] Provisioning new machine with config: &{Name:kubenet-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:01:27.078661   10268 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:01:27.086862   10268 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 11:01:27.101841   10268 start.go:159] libmachine.API.Create for "kubenet-363000" (driver="qemu2")
	I0408 11:01:27.101867   10268 client.go:168] LocalClient.Create starting
	I0408 11:01:27.101940   10268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:01:27.101973   10268 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:27.101985   10268 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:27.102019   10268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:01:27.102045   10268 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:27.102057   10268 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:27.102434   10268 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:01:27.252381   10268 main.go:141] libmachine: Creating SSH key...
	I0408 11:01:27.482368   10268 main.go:141] libmachine: Creating Disk image...
	I0408 11:01:27.482382   10268 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:01:27.482678   10268 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/disk.qcow2
	I0408 11:01:27.496238   10268 main.go:141] libmachine: STDOUT: 
	I0408 11:01:27.496260   10268 main.go:141] libmachine: STDERR: 
	I0408 11:01:27.496332   10268 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/disk.qcow2 +20000M
	I0408 11:01:27.507421   10268 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:01:27.507443   10268 main.go:141] libmachine: STDERR: 
	I0408 11:01:27.507456   10268 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/disk.qcow2
	I0408 11:01:27.507461   10268 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:01:27.507505   10268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:e6:c5:90:fd:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/kubenet-363000/disk.qcow2
	I0408 11:01:27.509355   10268 main.go:141] libmachine: STDOUT: 
	I0408 11:01:27.509373   10268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:01:27.509387   10268 client.go:171] duration metric: took 407.514834ms to LocalClient.Create
	I0408 11:01:29.511629   10268 start.go:128] duration metric: took 2.432919375s to createHost
	I0408 11:01:29.511719   10268 start.go:83] releasing machines lock for "kubenet-363000", held for 2.433099417s
	W0408 11:01:29.512058   10268 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:01:29.522794   10268 out.go:177] 
	W0408 11:01:29.530990   10268 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:01:29.531040   10268 out.go:239] * 
	* 
	W0408 11:01:29.533866   10268 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:01:29.543827   10268 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.92s)
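
[Editor's note] The failure is reproducible in isolation: socket_vmnet_client takes the socket path followed by the command to exec (the invocation format is visible in the libmachine lines above), so running it with a no-op command exercises only the socket connection. A sketch, assuming the binary path shown in this log:

	# Connects to the socket, then execs the given command with the vmnet fd passed as fd 3.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	# On this host it should fail with: Failed to connect to "/var/run/socket_vmnet": Connection refused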

TestStartStop/group/old-k8s-version/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-522000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-522000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.779204584s)

-- stdout --
	* [old-k8s-version-522000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-522000" primary control-plane node in "old-k8s-version-522000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-522000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 11:01:31.842630   10379 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:01:31.842861   10379 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:01:31.842880   10379 out.go:304] Setting ErrFile to fd 2...
	I0408 11:01:31.842883   10379 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:01:31.843134   10379 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:01:31.844503   10379 out.go:298] Setting JSON to false
	I0408 11:01:31.861350   10379 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7261,"bootTime":1712592030,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:01:31.861416   10379 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:01:31.867510   10379 out.go:177] * [old-k8s-version-522000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:01:31.876346   10379 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:01:31.879286   10379 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:01:31.876388   10379 notify.go:220] Checking for updates...
	I0408 11:01:31.885312   10379 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:01:31.886453   10379 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:01:31.889331   10379 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:01:31.892315   10379 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:01:31.895636   10379 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:01:31.895704   10379 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 11:01:31.895748   10379 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:01:31.900299   10379 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 11:01:31.907338   10379 start.go:297] selected driver: qemu2
	I0408 11:01:31.907343   10379 start.go:901] validating driver "qemu2" against <nil>
	I0408 11:01:31.907348   10379 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:01:31.909730   10379 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:01:31.913296   10379 out.go:177] * Automatically selected the socket_vmnet network
	I0408 11:01:31.916372   10379 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:01:31.916407   10379 cni.go:84] Creating CNI manager for ""
	I0408 11:01:31.916415   10379 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0408 11:01:31.916450   10379 start.go:340] cluster config:
	{Name:old-k8s-version-522000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-522000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:01:31.921138   10379 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:01:31.928262   10379 out.go:177] * Starting "old-k8s-version-522000" primary control-plane node in "old-k8s-version-522000" cluster
	I0408 11:01:31.932305   10379 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 11:01:31.932318   10379 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0408 11:01:31.932325   10379 cache.go:56] Caching tarball of preloaded images
	I0408 11:01:31.932377   10379 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 11:01:31.932382   10379 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0408 11:01:31.932438   10379 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/old-k8s-version-522000/config.json ...
	I0408 11:01:31.932451   10379 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/old-k8s-version-522000/config.json: {Name:mkfc23b6ed367deb647378caf45be8c08732b430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:01:31.932661   10379 start.go:360] acquireMachinesLock for old-k8s-version-522000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:01:31.932693   10379 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "old-k8s-version-522000"
	I0408 11:01:31.932704   10379 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-522000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-522000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:01:31.932729   10379 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:01:31.941333   10379 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:01:31.958278   10379 start.go:159] libmachine.API.Create for "old-k8s-version-522000" (driver="qemu2")
	I0408 11:01:31.958306   10379 client.go:168] LocalClient.Create starting
	I0408 11:01:31.958374   10379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:01:31.958403   10379 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:31.958414   10379 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:31.958455   10379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:01:31.958482   10379 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:31.958489   10379 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:31.958848   10379 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:01:32.112494   10379 main.go:141] libmachine: Creating SSH key...
	I0408 11:01:32.154329   10379 main.go:141] libmachine: Creating Disk image...
	I0408 11:01:32.154339   10379 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:01:32.154595   10379 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/disk.qcow2
	I0408 11:01:32.166768   10379 main.go:141] libmachine: STDOUT: 
	I0408 11:01:32.166789   10379 main.go:141] libmachine: STDERR: 
	I0408 11:01:32.166856   10379 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/disk.qcow2 +20000M
	I0408 11:01:32.177620   10379 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:01:32.177650   10379 main.go:141] libmachine: STDERR: 
	I0408 11:01:32.177668   10379 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/disk.qcow2
	I0408 11:01:32.177673   10379 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:01:32.177700   10379 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:c5:28:21:a4:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/disk.qcow2
	I0408 11:01:32.179434   10379 main.go:141] libmachine: STDOUT: 
	I0408 11:01:32.179453   10379 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:01:32.179473   10379 client.go:171] duration metric: took 221.15825ms to LocalClient.Create
	I0408 11:01:34.181754   10379 start.go:128] duration metric: took 2.248979375s to createHost
	I0408 11:01:34.181829   10379 start.go:83] releasing machines lock for "old-k8s-version-522000", held for 2.249112291s
	W0408 11:01:34.181942   10379 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:01:34.192271   10379 out.go:177] * Deleting "old-k8s-version-522000" in qemu2 ...
	W0408 11:01:34.223230   10379 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:01:34.223264   10379 start.go:728] Will try again in 5 seconds ...
	I0408 11:01:39.225482   10379 start.go:360] acquireMachinesLock for old-k8s-version-522000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:01:39.225815   10379 start.go:364] duration metric: took 252.375µs to acquireMachinesLock for "old-k8s-version-522000"
	I0408 11:01:39.225895   10379 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-522000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-522000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:01:39.226018   10379 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:01:39.234397   10379 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:01:39.267065   10379 start.go:159] libmachine.API.Create for "old-k8s-version-522000" (driver="qemu2")
	I0408 11:01:39.267112   10379 client.go:168] LocalClient.Create starting
	I0408 11:01:39.267214   10379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:01:39.267277   10379 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:39.267292   10379 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:39.267350   10379 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:01:39.267387   10379 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:39.267397   10379 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:39.267887   10379 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:01:39.424430   10379 main.go:141] libmachine: Creating SSH key...
	I0408 11:01:39.525103   10379 main.go:141] libmachine: Creating Disk image...
	I0408 11:01:39.525109   10379 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:01:39.525345   10379 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/disk.qcow2
	I0408 11:01:39.538013   10379 main.go:141] libmachine: STDOUT: 
	I0408 11:01:39.538034   10379 main.go:141] libmachine: STDERR: 
	I0408 11:01:39.538095   10379 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/disk.qcow2 +20000M
	I0408 11:01:39.548939   10379 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:01:39.548954   10379 main.go:141] libmachine: STDERR: 
	I0408 11:01:39.548966   10379 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/disk.qcow2
	I0408 11:01:39.548970   10379 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:01:39.549014   10379 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:7a:96:9b:2a:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/disk.qcow2
	I0408 11:01:39.550745   10379 main.go:141] libmachine: STDOUT: 
	I0408 11:01:39.550760   10379 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:01:39.550773   10379 client.go:171] duration metric: took 283.653708ms to LocalClient.Create
	I0408 11:01:41.552984   10379 start.go:128] duration metric: took 2.326911792s to createHost
	I0408 11:01:41.553061   10379 start.go:83] releasing machines lock for "old-k8s-version-522000", held for 2.327216333s
	W0408 11:01:41.553518   10379 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-522000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-522000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:01:41.562114   10379 out.go:177] 
	W0408 11:01:41.565254   10379 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:01:41.565287   10379 out.go:239] * 
	* 
	W0408 11:01:41.567161   10379 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:01:41.581109   10379 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-522000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000: exit status 7 (70.056625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-522000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.85s)
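
[Editor's note] If the daemon is installed but not running, restarting it normally clears this entire class of failures. A hedged sketch (binary and socket paths are taken from the log above; the launchd label and gateway address are assumptions based on socket_vmnet's standard install instructions, not confirmed by this report):

	# Manual foreground start (root is required to create the vmnet interface; gateway address assumed):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
	# Or, if managed by launchd (label assumed):
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet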

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-522000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-522000 create -f testdata/busybox.yaml: exit status 1 (29.808333ms)

** stderr ** 
	error: context "old-k8s-version-522000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-522000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000: exit status 7 (31.3785ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-522000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000: exit status 7 (32.529209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-522000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
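
[Editor's note] This and the remaining old-k8s-version failures are cascades, not independent bugs: FirstStart never created the cluster, so the kubeconfig context the later steps expect does not exist. One command confirms it (context name taken from the log above; exact wording of kubectl's error may vary by version):

	kubectl config get-contexts old-k8s-version-522000
	# Expected here, roughly: error: context old-k8s-version-522000 not found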

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-522000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-522000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-522000 describe deploy/metrics-server -n kube-system: exit status 1 (27.665333ms)

** stderr ** 
	error: context "old-k8s-version-522000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-522000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000: exit status 7 (31.117667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-522000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
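The assertion at start_stop_delete_test.go:221 appears to build the expected reference by joining the custom registry with the overridden image, i.e. fake.domain/registry.k8s.io/echoserver:1.4, and searches for it in the kubectl describe output; since that command failed, the search runs against an empty string. A tiny illustrative sketch of that check (values taken from the flags in the log; not the actual test code):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	registry := "fake.domain"                 // --registries=MetricsServer=...
    	image := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=...
    	want := registry + "/" + image
    	deployInfo := "" // kubectl describe failed, so there is nothing to search
    	fmt.Printf("deployment references %q: %v\n", want, strings.Contains(deployInfo, want))
    }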

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-522000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-522000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.190332875s)

-- stdout --
	* [old-k8s-version-522000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-522000" primary control-plane node in "old-k8s-version-522000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-522000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-522000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 11:01:44.990395   10434 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:01:44.990517   10434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:01:44.990520   10434 out.go:304] Setting ErrFile to fd 2...
	I0408 11:01:44.990523   10434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:01:44.990662   10434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:01:44.991686   10434 out.go:298] Setting JSON to false
	I0408 11:01:45.008428   10434 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7274,"bootTime":1712592030,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:01:45.008488   10434 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:01:45.013085   10434 out.go:177] * [old-k8s-version-522000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:01:45.019995   10434 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:01:45.020027   10434 notify.go:220] Checking for updates...
	I0408 11:01:45.024028   10434 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:01:45.026986   10434 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:01:45.029944   10434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:01:45.032958   10434 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:01:45.036015   10434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:01:45.039363   10434 config.go:182] Loaded profile config "old-k8s-version-522000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0408 11:01:45.042942   10434 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 11:01:45.046015   10434 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:01:45.050932   10434 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 11:01:45.058015   10434 start.go:297] selected driver: qemu2
	I0408 11:01:45.058023   10434 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-522000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-522000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:01:45.058099   10434 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:01:45.060594   10434 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:01:45.060645   10434 cni.go:84] Creating CNI manager for ""
	I0408 11:01:45.060653   10434 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0408 11:01:45.060682   10434 start.go:340] cluster config:
	{Name:old-k8s-version-522000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-522000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:01:45.065076   10434 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:01:45.072994   10434 out.go:177] * Starting "old-k8s-version-522000" primary control-plane node in "old-k8s-version-522000" cluster
	I0408 11:01:45.076073   10434 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 11:01:45.076092   10434 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0408 11:01:45.076100   10434 cache.go:56] Caching tarball of preloaded images
	I0408 11:01:45.076173   10434 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 11:01:45.076179   10434 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0408 11:01:45.076255   10434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/old-k8s-version-522000/config.json ...
	I0408 11:01:45.076800   10434 start.go:360] acquireMachinesLock for old-k8s-version-522000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:01:45.076828   10434 start.go:364] duration metric: took 22µs to acquireMachinesLock for "old-k8s-version-522000"
	I0408 11:01:45.076837   10434 start.go:96] Skipping create...Using existing machine configuration
	I0408 11:01:45.076842   10434 fix.go:54] fixHost starting: 
	I0408 11:01:45.076963   10434 fix.go:112] recreateIfNeeded on old-k8s-version-522000: state=Stopped err=<nil>
	W0408 11:01:45.076972   10434 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 11:01:45.080018   10434 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-522000" ...
	I0408 11:01:45.086969   10434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:7a:96:9b:2a:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/disk.qcow2
	I0408 11:01:45.089015   10434 main.go:141] libmachine: STDOUT: 
	I0408 11:01:45.089033   10434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:01:45.089065   10434 fix.go:56] duration metric: took 12.220542ms for fixHost
	I0408 11:01:45.089070   10434 start.go:83] releasing machines lock for "old-k8s-version-522000", held for 12.237625ms
	W0408 11:01:45.089079   10434 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:01:45.089108   10434 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:01:45.089113   10434 start.go:728] Will try again in 5 seconds ...
	I0408 11:01:50.091326   10434 start.go:360] acquireMachinesLock for old-k8s-version-522000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:01:50.091942   10434 start.go:364] duration metric: took 516.833µs to acquireMachinesLock for "old-k8s-version-522000"
	I0408 11:01:50.092100   10434 start.go:96] Skipping create...Using existing machine configuration
	I0408 11:01:50.092121   10434 fix.go:54] fixHost starting: 
	I0408 11:01:50.092872   10434 fix.go:112] recreateIfNeeded on old-k8s-version-522000: state=Stopped err=<nil>
	W0408 11:01:50.092900   10434 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 11:01:50.102455   10434 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-522000" ...
	I0408 11:01:50.105540   10434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:7a:96:9b:2a:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/old-k8s-version-522000/disk.qcow2
	I0408 11:01:50.115111   10434 main.go:141] libmachine: STDOUT: 
	I0408 11:01:50.115172   10434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:01:50.115264   10434 fix.go:56] duration metric: took 23.144833ms for fixHost
	I0408 11:01:50.115284   10434 start.go:83] releasing machines lock for "old-k8s-version-522000", held for 23.31675ms
	W0408 11:01:50.115489   10434 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-522000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-522000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:01:50.123432   10434 out.go:177] 
	W0408 11:01:50.127521   10434 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:01:50.127551   10434 out.go:239] * 
	* 
	W0408 11:01:50.129271   10434 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:01:50.140423   10434 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-522000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000: exit status 7 (56.051708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-522000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
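The real failure in this group is the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused`: socket_vmnet_client cannot reach the socket_vmnet daemon that provides guest networking, so the qemu2 VM never boots and every later subtest inherits a stopped host. A minimal standard-library sketch (socket path taken from the log) of probing that socket the way the wrapper does:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same unix socket the failing socket_vmnet_client invocation uses.
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		// On this host this reports "connection refused",
    		// matching the driver output above.
    		fmt.Println("socket_vmnet unreachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }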

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-522000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000: exit status 7 (33.107833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-522000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-522000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-522000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-522000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.17ms)

** stderr ** 
	error: context "old-k8s-version-522000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-522000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000: exit status 7 (31.494958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-522000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-522000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000: exit status 7 (31.248542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-522000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
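The `(-want +got)` block above is a go-cmp style diff: all expected v1.20.0 images are reported missing because `image list` ran against a VM that never started. A reduced sketch of how such a diff is produced (assuming github.com/google/go-cmp; the want list is abbreviated from the log):

    package main

    import (
    	"fmt"

    	"github.com/google/go-cmp/cmp"
    )

    func main() {
    	want := []string{
    		"k8s.gcr.io/coredns:1.7.0",
    		"k8s.gcr.io/etcd:3.4.13-0",
    		"k8s.gcr.io/kube-apiserver:v1.20.0",
    	}
    	var got []string // image list returned nothing: the host is stopped
    	if d := cmp.Diff(want, got); d != "" {
    		fmt.Printf("v1.20.0 images missing (-want +got):\n%s", d)
    	}
    }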

TestStartStop/group/old-k8s-version/serial/Pause (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-522000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-522000 --alsologtostderr -v=1: exit status 83 (53.4485ms)

-- stdout --
	* The control-plane node old-k8s-version-522000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-522000"

-- /stdout --
** stderr ** 
	I0408 11:01:50.405621   10453 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:01:50.406579   10453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:01:50.406590   10453 out.go:304] Setting ErrFile to fd 2...
	I0408 11:01:50.406593   10453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:01:50.406750   10453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:01:50.406960   10453 out.go:298] Setting JSON to false
	I0408 11:01:50.406969   10453 mustload.go:65] Loading cluster: old-k8s-version-522000
	I0408 11:01:50.407174   10453 config.go:182] Loaded profile config "old-k8s-version-522000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0408 11:01:50.411945   10453 out.go:177] * The control-plane node old-k8s-version-522000 host is not running: state=Stopped
	I0408 11:01:50.419852   10453 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-522000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-522000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000: exit status 7 (31.586541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-522000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000: exit status 7 (32.942959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-522000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.12s)
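Note how the harness distinguishes outcomes purely by process exit code throughout this report: 1 from kubectl, 7 from `minikube status` on a stopped host, 80 for GUEST_PROVISION, and 83 here from `pause` against a non-running control plane. A minimal stdlib sketch of recovering such a code from a child process (the command is a stand-in, not the real binary):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Stand-in for "out/minikube-darwin-arm64 pause -p ...".
    	cmd := exec.Command("false")
    	err := cmd.Run()
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		// The harness reports this number, e.g. "exit status 83" above.
    		fmt.Println("exit status", exitErr.ExitCode())
    	}
    }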

TestStartStop/group/no-preload/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-042000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-042000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.1: exit status 80 (9.913855916s)

-- stdout --
	* [no-preload-042000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-042000" primary control-plane node in "no-preload-042000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-042000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 11:01:50.903094   10476 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:01:50.903246   10476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:01:50.903252   10476 out.go:304] Setting ErrFile to fd 2...
	I0408 11:01:50.903254   10476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:01:50.903386   10476 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:01:50.904532   10476 out.go:298] Setting JSON to false
	I0408 11:01:50.921009   10476 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7280,"bootTime":1712592030,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:01:50.921082   10476 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:01:50.925009   10476 out.go:177] * [no-preload-042000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:01:50.932898   10476 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:01:50.936885   10476 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:01:50.932939   10476 notify.go:220] Checking for updates...
	I0408 11:01:50.942846   10476 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:01:50.945883   10476 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:01:50.948896   10476 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:01:50.951888   10476 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:01:50.955175   10476 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:01:50.955233   10476 config.go:182] Loaded profile config "stopped-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0408 11:01:50.955278   10476 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:01:50.959869   10476 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 11:01:50.966884   10476 start.go:297] selected driver: qemu2
	I0408 11:01:50.966890   10476 start.go:901] validating driver "qemu2" against <nil>
	I0408 11:01:50.966897   10476 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:01:50.969069   10476 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:01:50.971914   10476 out.go:177] * Automatically selected the socket_vmnet network
	I0408 11:01:50.974924   10476 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:01:50.974962   10476 cni.go:84] Creating CNI manager for ""
	I0408 11:01:50.974968   10476 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 11:01:50.974972   10476 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 11:01:50.975002   10476 start.go:340] cluster config:
	{Name:no-preload-042000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:01:50.979275   10476 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:01:50.987907   10476 out.go:177] * Starting "no-preload-042000" primary control-plane node in "no-preload-042000" cluster
	I0408 11:01:50.991861   10476 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime docker
	I0408 11:01:50.991923   10476 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/no-preload-042000/config.json ...
	I0408 11:01:50.991937   10476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/no-preload-042000/config.json: {Name:mke004ccc62e748a9abb4787ae6cd92dc4fdd4c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:01:50.991934   10476 cache.go:107] acquiring lock: {Name:mk85baeb762137470497570e9296584c4f360ef9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:01:50.991943   10476 cache.go:107] acquiring lock: {Name:mked2bf2275b12767ac4144034079225e76f900a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:01:50.991942   10476 cache.go:107] acquiring lock: {Name:mk04efc435e19465f1f0dd11028b1f39579f1719 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:01:50.991989   10476 cache.go:115] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0408 11:01:50.991994   10476 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 61.542µs
	I0408 11:01:50.991999   10476 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0408 11:01:50.992007   10476 cache.go:107] acquiring lock: {Name:mk6c064af565aa9cb9a572ea1ee4a36e88fb1291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:01:50.992105   10476 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0408 11:01:50.992126   10476 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0408 11:01:50.992120   10476 cache.go:107] acquiring lock: {Name:mkc61bc6b8a6e66208ccb5235537c499a9aa6956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:01:50.992167   10476 start.go:360] acquireMachinesLock for no-preload-042000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:01:50.992194   10476 start.go:364] duration metric: took 21.792µs to acquireMachinesLock for "no-preload-042000"
	I0408 11:01:50.992180   10476 cache.go:107] acquiring lock: {Name:mk69d8c268c08c3343a2c15182713afbcf021f45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:01:50.992238   10476 cache.go:107] acquiring lock: {Name:mk9fba5969d4de84f7608924eba80af4e865ae4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:01:50.992246   10476 cache.go:107] acquiring lock: {Name:mk9f7b4306e06e123370817da67993f062da4f7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:01:50.992105   10476 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0408 11:01:50.992290   10476 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0408 11:01:50.992203   10476 start.go:93] Provisioning new machine with config: &{Name:no-preload-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:01:50.992330   10476 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:01:50.992336   10476 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0408 11:01:51.000901   10476 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:01:50.992379   10476 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 11:01:50.992396   10476 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0408 11:01:51.005847   10476 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0408 11:01:51.005871   10476 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.1
	I0408 11:01:51.006656   10476 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.1
	I0408 11:01:51.008930   10476 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 11:01:51.008970   10476 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0408 11:01:51.008981   10476 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.1
	I0408 11:01:51.009005   10476 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.1
	I0408 11:01:51.016241   10476 start.go:159] libmachine.API.Create for "no-preload-042000" (driver="qemu2")
	I0408 11:01:51.016265   10476 client.go:168] LocalClient.Create starting
	I0408 11:01:51.016351   10476 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:01:51.016377   10476 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:51.016385   10476 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:51.016420   10476 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:01:51.016440   10476 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:51.016458   10476 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:51.016789   10476 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:01:51.228068   10476 main.go:141] libmachine: Creating SSH key...
	I0408 11:01:51.314093   10476 main.go:141] libmachine: Creating Disk image...
	I0408 11:01:51.314112   10476 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:01:51.314355   10476 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/disk.qcow2
	I0408 11:01:51.335292   10476 main.go:141] libmachine: STDOUT: 
	I0408 11:01:51.335309   10476 main.go:141] libmachine: STDERR: 
	I0408 11:01:51.335406   10476 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/disk.qcow2 +20000M
	I0408 11:01:51.354462   10476 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:01:51.354479   10476 main.go:141] libmachine: STDERR: 
	I0408 11:01:51.354488   10476 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/disk.qcow2
	I0408 11:01:51.354495   10476 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:01:51.354516   10476 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:4a:b6:90:5c:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/disk.qcow2
	I0408 11:01:51.356616   10476 main.go:141] libmachine: STDOUT: 
	I0408 11:01:51.356640   10476 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:01:51.356659   10476 client.go:171] duration metric: took 340.386ms to LocalClient.Create
	I0408 11:01:51.434405   10476 cache.go:162] opening:  /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1
	I0408 11:01:51.439328   10476 cache.go:162] opening:  /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0408 11:01:51.461552   10476 cache.go:162] opening:  /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.1
	I0408 11:01:51.476396   10476 cache.go:162] opening:  /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1
	I0408 11:01:51.478165   10476 cache.go:162] opening:  /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1
	I0408 11:01:51.481784   10476 cache.go:162] opening:  /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0408 11:01:51.487627   10476 cache.go:162] opening:  /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0408 11:01:51.652318   10476 cache.go:157] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0408 11:01:51.652334   10476 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 660.3225ms
	I0408 11:01:51.652340   10476 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0408 11:01:53.356936   10476 start.go:128] duration metric: took 2.364581917s to createHost
	I0408 11:01:53.356969   10476 start.go:83] releasing machines lock for "no-preload-042000", held for 2.364757875s
	W0408 11:01:53.356991   10476 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:01:53.366508   10476 out.go:177] * Deleting "no-preload-042000" in qemu2 ...
	W0408 11:01:53.376547   10476 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:01:53.376554   10476 start.go:728] Will try again in 5 seconds ...
	I0408 11:01:53.744491   10476 cache.go:157] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0408 11:01:53.744512   10476 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 2.752398s
	I0408 11:01:53.744521   10476 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0408 11:01:53.918255   10476 cache.go:157] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1 exists
	I0408 11:01:53.918288   10476 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-rc.1" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1" took 2.926041875s
	I0408 11:01:53.918300   10476 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-rc.1 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1 succeeded
	I0408 11:01:54.148899   10476 cache.go:157] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.1 exists
	I0408 11:01:54.148924   10476 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-rc.1" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.1" took 3.156961333s
	I0408 11:01:54.148936   10476 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-rc.1 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.1 succeeded
	I0408 11:01:55.084977   10476 cache.go:157] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1 exists
	I0408 11:01:55.085015   10476 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-rc.1" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1" took 4.092889333s
	I0408 11:01:55.085034   10476 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-rc.1 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1 succeeded
	I0408 11:01:56.586855   10476 cache.go:157] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1 exists
	I0408 11:01:56.586890   10476 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-rc.1" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1" took 5.594920292s
	I0408 11:01:56.586898   10476 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-rc.1 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1 succeeded
	I0408 11:01:58.376751   10476 start.go:360] acquireMachinesLock for no-preload-042000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:01:58.377234   10476 start.go:364] duration metric: took 406.875µs to acquireMachinesLock for "no-preload-042000"
	I0408 11:01:58.377370   10476 start.go:93] Provisioning new machine with config: &{Name:no-preload-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:01:58.377608   10476 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:01:58.389244   10476 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:01:58.438429   10476 start.go:159] libmachine.API.Create for "no-preload-042000" (driver="qemu2")
	I0408 11:01:58.438498   10476 client.go:168] LocalClient.Create starting
	I0408 11:01:58.438624   10476 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:01:58.438697   10476 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:58.438713   10476 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:58.438804   10476 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:01:58.438847   10476 main.go:141] libmachine: Decoding PEM data...
	I0408 11:01:58.438868   10476 main.go:141] libmachine: Parsing certificate...
	I0408 11:01:58.439360   10476 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:01:58.602911   10476 main.go:141] libmachine: Creating SSH key...
	I0408 11:01:58.712215   10476 main.go:141] libmachine: Creating Disk image...
	I0408 11:01:58.712223   10476 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:01:58.712499   10476 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/disk.qcow2
	I0408 11:01:58.725829   10476 main.go:141] libmachine: STDOUT: 
	I0408 11:01:58.725858   10476 main.go:141] libmachine: STDERR: 
	I0408 11:01:58.725921   10476 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/disk.qcow2 +20000M
	I0408 11:01:58.737709   10476 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:01:58.737746   10476 main.go:141] libmachine: STDERR: 
	I0408 11:01:58.737797   10476 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/disk.qcow2
	I0408 11:01:58.737802   10476 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:01:58.737844   10476 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:56:8f:51:93:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/disk.qcow2
	I0408 11:01:58.739785   10476 main.go:141] libmachine: STDOUT: 
	I0408 11:01:58.739802   10476 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:01:58.739817   10476 client.go:171] duration metric: took 301.310833ms to LocalClient.Create
	I0408 11:01:59.514252   10476 cache.go:157] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0408 11:01:59.514310   10476 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 8.522176166s
	I0408 11:01:59.514327   10476 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0408 11:01:59.514361   10476 cache.go:87] Successfully saved all images to host disk.
	I0408 11:02:00.741100   10476 start.go:128] duration metric: took 2.363394291s to createHost
	I0408 11:02:00.741238   10476 start.go:83] releasing machines lock for "no-preload-042000", held for 2.363962875s
	W0408 11:02:00.741612   10476 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-042000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:00.750143   10476 out.go:177] 
	W0408 11:02:00.760276   10476 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:02:00.760401   10476 out.go:239] * 
	W0408 11:02:00.762741   10476 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:02:00.773167   10476 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-042000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000: exit status 7 (60.777209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-042000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.98s)
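Note: every start attempt above fails at the same step. The qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, so no VM ever boots and the host stays "Stopped". A minimal triage sketch, assuming socket_vmnet was installed via Homebrew (the service name is an assumption, not taken from this run):

	# Check that the socket_vmnet daemon is running and its socket exists
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Restart the daemon if it is down (assumes the Homebrew service)
	sudo brew services restart socket_vmnet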

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-042000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-042000 create -f testdata/busybox.yaml: exit status 1 (28.929041ms)

** stderr ** 
	error: context "no-preload-042000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-042000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000: exit status 7 (31.966875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-042000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000: exit status 7 (31.950292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-042000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
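Note: this is a cascade failure from FirstStart. The cluster was never provisioned, so the kubeconfig contains no "no-preload-042000" context and kubectl exits before it can apply testdata/busybox.yaml. The missing context can be confirmed directly (illustrative command, not part of the test run):

	kubectl --kubeconfig /Users/jenkins/minikube-integration/18585-6624/kubeconfig config get-contexts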

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-042000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-042000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-042000 describe deploy/metrics-server -n kube-system: exit status 1 (27.412792ms)

** stderr ** 
	error: context "no-preload-042000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-042000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000: exit status 7 (31.477791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-042000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
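Note: `addons enable` only rewrites the stopped profile's config, so it exits 0, but the kubectl verification step needs a running apiserver and hits the same missing context. The recorded addon state can still be inspected without a running cluster (illustrative):

	out/minikube-darwin-arm64 addons list -p no-preload-042000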

TestStartStop/group/no-preload/serial/SecondStart (7.22s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-042000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-042000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.1: exit status 80 (7.156566333s)

-- stdout --
	* [no-preload-042000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-042000" primary control-plane node in "no-preload-042000" cluster
	* Restarting existing qemu2 VM for "no-preload-042000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-042000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 11:02:03.181661   10551 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:02:03.181805   10551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:03.181808   10551 out.go:304] Setting ErrFile to fd 2...
	I0408 11:02:03.181811   10551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:03.181948   10551 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:02:03.183015   10551 out.go:298] Setting JSON to false
	I0408 11:02:03.199849   10551 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7293,"bootTime":1712592030,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:02:03.199927   10551 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:02:03.203900   10551 out.go:177] * [no-preload-042000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:02:03.220901   10551 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:02:03.215026   10551 notify.go:220] Checking for updates...
	I0408 11:02:03.227844   10551 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:02:03.231892   10551 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:02:03.235863   10551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:02:03.239819   10551 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:02:03.242839   10551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:02:03.246178   10551 config.go:182] Loaded profile config "no-preload-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.1
	I0408 11:02:03.246436   10551 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:02:03.250802   10551 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 11:02:03.257881   10551 start.go:297] selected driver: qemu2
	I0408 11:02:03.257887   10551 start.go:901] validating driver "qemu2" against &{Name:no-preload-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-042000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:02:03.257940   10551 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:02:03.260271   10551 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:02:03.260321   10551 cni.go:84] Creating CNI manager for ""
	I0408 11:02:03.260328   10551 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 11:02:03.260352   10551 start.go:340] cluster config:
	{Name:no-preload-042000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:no-preload-042000 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host M
ount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:02:03.264662   10551 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:02:03.271870   10551 out.go:177] * Starting "no-preload-042000" primary control-plane node in "no-preload-042000" cluster
	I0408 11:02:03.275737   10551 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime docker
	I0408 11:02:03.275791   10551 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/no-preload-042000/config.json ...
	I0408 11:02:03.275813   10551 cache.go:107] acquiring lock: {Name:mk04efc435e19465f1f0dd11028b1f39579f1719 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:02:03.275830   10551 cache.go:107] acquiring lock: {Name:mk9fba5969d4de84f7608924eba80af4e865ae4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:02:03.275836   10551 cache.go:107] acquiring lock: {Name:mkc61bc6b8a6e66208ccb5235537c499a9aa6956 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:02:03.275877   10551 cache.go:115] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1 exists
	I0408 11:02:03.275877   10551 cache.go:107] acquiring lock: {Name:mk6c064af565aa9cb9a572ea1ee4a36e88fb1291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:02:03.275881   10551 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-rc.1" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1" took 51.333µs
	I0408 11:02:03.275890   10551 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-rc.1 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-rc.1 succeeded
	I0408 11:02:03.275893   10551 cache.go:115] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1 exists
	I0408 11:02:03.275897   10551 cache.go:107] acquiring lock: {Name:mk69d8c268c08c3343a2c15182713afbcf021f45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:02:03.275905   10551 cache.go:107] acquiring lock: {Name:mked2bf2275b12767ac4144034079225e76f900a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:02:03.275931   10551 cache.go:115] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0408 11:02:03.275819   10551 cache.go:107] acquiring lock: {Name:mk85baeb762137470497570e9296584c4f360ef9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:02:03.275899   10551 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-rc.1" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1" took 78.792µs
	I0408 11:02:03.275978   10551 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-rc.1 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.1 succeeded
	I0408 11:02:03.275934   10551 cache.go:115] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0408 11:02:03.275987   10551 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 124.5µs
	I0408 11:02:03.275990   10551 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0408 11:02:03.275939   10551 cache.go:115] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.1 exists
	I0408 11:02:03.276016   10551 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-rc.1" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.1" took 110.916µs
	I0408 11:02:03.276020   10551 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-rc.1 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-rc.1 succeeded
	I0408 11:02:03.275988   10551 cache.go:115] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1 exists
	I0408 11:02:03.276024   10551 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-rc.1" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1" took 214µs
	I0408 11:02:03.276028   10551 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-rc.1 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-rc.1 succeeded
	I0408 11:02:03.275940   10551 cache.go:107] acquiring lock: {Name:mk9f7b4306e06e123370817da67993f062da4f7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:02:03.275951   10551 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 53.833µs
	I0408 11:02:03.276043   10551 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0408 11:02:03.275983   10551 cache.go:115] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0408 11:02:03.276046   10551 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 229.25µs
	I0408 11:02:03.276050   10551 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0408 11:02:03.276058   10551 cache.go:115] /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0408 11:02:03.276063   10551 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 123.667µs
	I0408 11:02:03.276066   10551 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0408 11:02:03.276070   10551 cache.go:87] Successfully saved all images to host disk.
	I0408 11:02:03.276169   10551 start.go:360] acquireMachinesLock for no-preload-042000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:02:03.276196   10551 start.go:364] duration metric: took 21.417µs to acquireMachinesLock for "no-preload-042000"
	I0408 11:02:03.276204   10551 start.go:96] Skipping create...Using existing machine configuration
	I0408 11:02:03.276210   10551 fix.go:54] fixHost starting: 
	I0408 11:02:03.276312   10551 fix.go:112] recreateIfNeeded on no-preload-042000: state=Stopped err=<nil>
	W0408 11:02:03.276320   10551 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 11:02:03.283720   10551 out.go:177] * Restarting existing qemu2 VM for "no-preload-042000" ...
	I0408 11:02:03.287899   10551 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:56:8f:51:93:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/disk.qcow2
	I0408 11:02:03.289985   10551 main.go:141] libmachine: STDOUT: 
	I0408 11:02:03.290006   10551 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:02:03.290034   10551 fix.go:56] duration metric: took 13.822417ms for fixHost
	I0408 11:02:03.290039   10551 start.go:83] releasing machines lock for "no-preload-042000", held for 13.838667ms
	W0408 11:02:03.290045   10551 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:02:03.290089   10551 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:03.290093   10551 start.go:728] Will try again in 5 seconds ...
	I0408 11:02:08.292254   10551 start.go:360] acquireMachinesLock for no-preload-042000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:02:10.233951   10551 start.go:364] duration metric: took 1.941636458s to acquireMachinesLock for "no-preload-042000"
	I0408 11:02:10.234126   10551 start.go:96] Skipping create...Using existing machine configuration
	I0408 11:02:10.234152   10551 fix.go:54] fixHost starting: 
	I0408 11:02:10.234905   10551 fix.go:112] recreateIfNeeded on no-preload-042000: state=Stopped err=<nil>
	W0408 11:02:10.234934   10551 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 11:02:10.250498   10551 out.go:177] * Restarting existing qemu2 VM for "no-preload-042000" ...
	I0408 11:02:10.262714   10551 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:56:8f:51:93:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/no-preload-042000/disk.qcow2
	I0408 11:02:10.272864   10551 main.go:141] libmachine: STDOUT: 
	I0408 11:02:10.273005   10551 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:02:10.273092   10551 fix.go:56] duration metric: took 38.946417ms for fixHost
	I0408 11:02:10.273111   10551 start.go:83] releasing machines lock for "no-preload-042000", held for 39.128875ms
	W0408 11:02:10.273278   10551 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-042000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:10.281509   10551 out.go:177] 
	W0408 11:02:10.284563   10551 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:02:10.284599   10551 out.go:239] * 
	W0408 11:02:10.286333   10551 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:02:10.295406   10551 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-042000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-rc.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000: exit status 7 (58.244083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-042000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.22s)
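Note: the second start takes the fixHost path (reusing the existing machine) rather than createHost, but both paths end in the same socket_vmnet_client invocation, so the retry five seconds later fails identically. The socket can be exercised on its own (sketch; wrapping an arbitrary command this way assumes socket_vmnet_client's usual `socket_vmnet_client SOCKET COMMAND...` form):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true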

TestStartStop/group/embed-certs/serial/FirstStart (10.16s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-956000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-956000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (10.087209625s)

-- stdout --
	* [embed-certs-956000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-956000" primary control-plane node in "embed-certs-956000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-956000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 11:02:07.659070   10562 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:02:07.659209   10562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:07.659212   10562 out.go:304] Setting ErrFile to fd 2...
	I0408 11:02:07.659215   10562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:07.659333   10562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:02:07.660435   10562 out.go:298] Setting JSON to false
	I0408 11:02:07.676798   10562 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7297,"bootTime":1712592030,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:02:07.676862   10562 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:02:07.680731   10562 out.go:177] * [embed-certs-956000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:02:07.687710   10562 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:02:07.687754   10562 notify.go:220] Checking for updates...
	I0408 11:02:07.691708   10562 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:02:07.694753   10562 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:02:07.697627   10562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:02:07.700684   10562 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:02:07.703735   10562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:02:07.707049   10562 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:02:07.707123   10562 config.go:182] Loaded profile config "no-preload-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.1
	I0408 11:02:07.707184   10562 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:02:07.711669   10562 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 11:02:07.718650   10562 start.go:297] selected driver: qemu2
	I0408 11:02:07.718657   10562 start.go:901] validating driver "qemu2" against <nil>
	I0408 11:02:07.718662   10562 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:02:07.720991   10562 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:02:07.724746   10562 out.go:177] * Automatically selected the socket_vmnet network
	I0408 11:02:07.727807   10562 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:02:07.727849   10562 cni.go:84] Creating CNI manager for ""
	I0408 11:02:07.727856   10562 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 11:02:07.727860   10562 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 11:02:07.727903   10562 start.go:340] cluster config:
	{Name:embed-certs-956000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-956000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socke
t_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:02:07.732556   10562 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:02:07.740669   10562 out.go:177] * Starting "embed-certs-956000" primary control-plane node in "embed-certs-956000" cluster
	I0408 11:02:07.744560   10562 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 11:02:07.744580   10562 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 11:02:07.744586   10562 cache.go:56] Caching tarball of preloaded images
	I0408 11:02:07.744664   10562 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 11:02:07.744670   10562 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 11:02:07.744740   10562 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/embed-certs-956000/config.json ...
	I0408 11:02:07.744759   10562 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/embed-certs-956000/config.json: {Name:mke1eef68cf9b9c85b79e4bd38cf290a19159a94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:02:07.744977   10562 start.go:360] acquireMachinesLock for embed-certs-956000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:02:07.745008   10562 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "embed-certs-956000"
	I0408 11:02:07.745019   10562 start.go:93] Provisioning new machine with config: &{Name:embed-certs-956000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.29.3 ClusterName:embed-certs-956000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:02:07.745058   10562 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:02:07.752714   10562 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:02:07.770216   10562 start.go:159] libmachine.API.Create for "embed-certs-956000" (driver="qemu2")
	I0408 11:02:07.770243   10562 client.go:168] LocalClient.Create starting
	I0408 11:02:07.770315   10562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:02:07.770346   10562 main.go:141] libmachine: Decoding PEM data...
	I0408 11:02:07.770359   10562 main.go:141] libmachine: Parsing certificate...
	I0408 11:02:07.770395   10562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:02:07.770418   10562 main.go:141] libmachine: Decoding PEM data...
	I0408 11:02:07.770425   10562 main.go:141] libmachine: Parsing certificate...
	I0408 11:02:07.770774   10562 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:02:07.970165   10562 main.go:141] libmachine: Creating SSH key...
	I0408 11:02:08.204987   10562 main.go:141] libmachine: Creating Disk image...
	I0408 11:02:08.205000   10562 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:02:08.205532   10562 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/disk.qcow2
	I0408 11:02:08.218908   10562 main.go:141] libmachine: STDOUT: 
	I0408 11:02:08.218938   10562 main.go:141] libmachine: STDERR: 
	I0408 11:02:08.219000   10562 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/disk.qcow2 +20000M
	I0408 11:02:08.229791   10562 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:02:08.229809   10562 main.go:141] libmachine: STDERR: 
	I0408 11:02:08.229822   10562 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/disk.qcow2
	I0408 11:02:08.229829   10562 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:02:08.229865   10562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:6a:95:4d:45:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/disk.qcow2
	I0408 11:02:08.231559   10562 main.go:141] libmachine: STDOUT: 
	I0408 11:02:08.231578   10562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:02:08.231599   10562 client.go:171] duration metric: took 461.347625ms to LocalClient.Create
	I0408 11:02:10.233779   10562 start.go:128] duration metric: took 2.488683875s to createHost
	I0408 11:02:10.233829   10562 start.go:83] releasing machines lock for "embed-certs-956000", held for 2.488797042s
	W0408 11:02:10.233905   10562 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:10.258469   10562 out.go:177] * Deleting "embed-certs-956000" in qemu2 ...
	W0408 11:02:10.319473   10562 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:10.319493   10562 start.go:728] Will try again in 5 seconds ...
	I0408 11:02:15.321741   10562 start.go:360] acquireMachinesLock for embed-certs-956000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:02:15.322277   10562 start.go:364] duration metric: took 340.208µs to acquireMachinesLock for "embed-certs-956000"
	I0408 11:02:15.322458   10562 start.go:93] Provisioning new machine with config: &{Name:embed-certs-956000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-956000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:02:15.322796   10562 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:02:15.340458   10562 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:02:15.388663   10562 start.go:159] libmachine.API.Create for "embed-certs-956000" (driver="qemu2")
	I0408 11:02:15.388716   10562 client.go:168] LocalClient.Create starting
	I0408 11:02:15.388834   10562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:02:15.388896   10562 main.go:141] libmachine: Decoding PEM data...
	I0408 11:02:15.388916   10562 main.go:141] libmachine: Parsing certificate...
	I0408 11:02:15.389002   10562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:02:15.389045   10562 main.go:141] libmachine: Decoding PEM data...
	I0408 11:02:15.389055   10562 main.go:141] libmachine: Parsing certificate...
	I0408 11:02:15.389571   10562 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:02:15.554245   10562 main.go:141] libmachine: Creating SSH key...
	I0408 11:02:15.644349   10562 main.go:141] libmachine: Creating Disk image...
	I0408 11:02:15.644355   10562 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:02:15.644587   10562 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/disk.qcow2
	I0408 11:02:15.656957   10562 main.go:141] libmachine: STDOUT: 
	I0408 11:02:15.656992   10562 main.go:141] libmachine: STDERR: 
	I0408 11:02:15.657046   10562 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/disk.qcow2 +20000M
	I0408 11:02:15.667723   10562 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:02:15.667747   10562 main.go:141] libmachine: STDERR: 
	I0408 11:02:15.667760   10562 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/disk.qcow2
	I0408 11:02:15.667765   10562 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:02:15.667821   10562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:8e:0b:a1:16:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/disk.qcow2
	I0408 11:02:15.669596   10562 main.go:141] libmachine: STDOUT: 
	I0408 11:02:15.669614   10562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:02:15.669627   10562 client.go:171] duration metric: took 280.904041ms to LocalClient.Create
	I0408 11:02:17.671812   10562 start.go:128] duration metric: took 2.348972292s to createHost
	I0408 11:02:17.671926   10562 start.go:83] releasing machines lock for "embed-certs-956000", held for 2.349510208s
	W0408 11:02:17.672267   10562 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-956000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-956000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:17.685098   10562 out.go:177] 
	W0408 11:02:17.690067   10562 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:02:17.690112   10562 out.go:239] * 
	* 
	W0408 11:02:17.692619   10562 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:02:17.700032   10562 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-956000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000: exit status 7 (67.52375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-956000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.16s)
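Note on the failure mode above: both start attempts in this test die at the same step, socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu2 VM is never created. As a minimal sketch (not part of minikube or this harness; the file name and wording are hypothetical), the daemon's reachability can be probed in Go before any qemu2-driver run:

	// probe.go: hypothetical pre-flight check. Assumes the daemon socket path
	// /var/run/socket_vmnet, matching SocketVMnetPath in the config logged above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// A daemon that is not listening produces the same
			// "Connection refused" seen throughout the stderr above.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If such a probe fails, restarting the socket_vmnet daemon on the build agent (however it is managed on that host) would be the first thing to try before rerunning the suite.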

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-042000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000: exit status 7 (33.169916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-042000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-042000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-042000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-042000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.978708ms)

** stderr ** 
	error: context "no-preload-042000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-042000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000: exit status 7 (31.34ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-042000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-042000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-rc.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-rc.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-rc.1",
- 	"registry.k8s.io/kube-proxy:v1.30.0-rc.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-rc.1",
- 	"registry.k8s.io/pause:3.9",
  }
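The (-want +got) diff above lists every expected image as missing because "image list --format=json" ran against a profile whose host never started, so the got side is empty. The diff format matches github.com/google/go-cmp; assuming the harness compares image lists roughly like this (an illustration only, with a hypothetical test name and an abbreviated want list):

	// images_sketch_test.go: illustrative comparison in the style of the
	// diff above; not the harness's actual code.
	package sketch

	import (
		"testing"

		"github.com/google/go-cmp/cmp"
	)

	func TestImagesPresent(t *testing.T) {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.30.0-rc.1",
			// ...remaining images as listed in the diff above...
		}
		var got []string // empty: a stopped host reports no images
		if diff := cmp.Diff(want, got); diff != "" {
			t.Errorf("v1.30.0-rc.1 images missing (-want +got):\n%s", diff)
		}
	}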
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000: exit status 7 (31.023833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-042000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-042000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-042000 --alsologtostderr -v=1: exit status 83 (51.306541ms)

-- stdout --
	* The control-plane node no-preload-042000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-042000"

-- /stdout --
** stderr ** 
	I0408 11:02:10.562831   10584 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:02:10.563025   10584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:10.563028   10584 out.go:304] Setting ErrFile to fd 2...
	I0408 11:02:10.563030   10584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:10.563152   10584 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:02:10.563358   10584 out.go:298] Setting JSON to false
	I0408 11:02:10.563365   10584 mustload.go:65] Loading cluster: no-preload-042000
	I0408 11:02:10.563560   10584 config.go:182] Loaded profile config "no-preload-042000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.1
	I0408 11:02:10.568244   10584 out.go:177] * The control-plane node no-preload-042000 host is not running: state=Stopped
	I0408 11:02:10.579419   10584 out.go:177]   To start a cluster, run: "minikube start -p no-preload-042000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-042000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000: exit status 7 (30.890208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-042000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000: exit status 7 (30.881333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-042000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
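Once FirstStart fails, the remaining serial steps for this profile all degrade the same way: kubectl has no context, pause exits 83 with the state=Stopped advisory shown above, and the post-mortem probe exits 7, which helpers_test.go:239 deliberately tolerates ("may be ok"). The probe is easy to reproduce outside the harness; a sketch, with the binary path and profile name taken from the logs above:

	// status_probe.go: hypothetical stand-alone reproduction of the
	// post-mortem status check.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "no-preload-042000")
		out, err := cmd.Output() // stdout was "Stopped" in the runs above
		fmt.Printf("host state: %s\n", out)
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("exit code: %d\n", ee.ExitCode()) // 7 for a stopped host
		}
	}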

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-664000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-664000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (9.936757375s)

-- stdout --
	* [default-k8s-diff-port-664000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-664000" primary control-plane node in "default-k8s-diff-port-664000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-664000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 11:02:11.287525   10619 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:02:11.287728   10619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:11.287734   10619 out.go:304] Setting ErrFile to fd 2...
	I0408 11:02:11.287737   10619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:11.287854   10619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:02:11.288990   10619 out.go:298] Setting JSON to false
	I0408 11:02:11.305341   10619 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7301,"bootTime":1712592030,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:02:11.305408   10619 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:02:11.310455   10619 out.go:177] * [default-k8s-diff-port-664000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:02:11.318402   10619 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:02:11.318450   10619 notify.go:220] Checking for updates...
	I0408 11:02:11.325426   10619 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:02:11.328455   10619 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:02:11.331390   10619 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:02:11.334413   10619 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:02:11.337459   10619 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:02:11.340674   10619 config.go:182] Loaded profile config "embed-certs-956000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:02:11.340741   10619 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:02:11.340785   10619 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:02:11.345384   10619 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 11:02:11.352359   10619 start.go:297] selected driver: qemu2
	I0408 11:02:11.352367   10619 start.go:901] validating driver "qemu2" against <nil>
	I0408 11:02:11.352376   10619 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:02:11.354827   10619 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:02:11.358467   10619 out.go:177] * Automatically selected the socket_vmnet network
	I0408 11:02:11.361524   10619 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:02:11.361583   10619 cni.go:84] Creating CNI manager for ""
	I0408 11:02:11.361592   10619 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 11:02:11.361596   10619 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 11:02:11.361620   10619 start.go:340] cluster config:
	{Name:default-k8s-diff-port-664000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-664000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:02:11.366248   10619 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:02:11.374240   10619 out.go:177] * Starting "default-k8s-diff-port-664000" primary control-plane node in "default-k8s-diff-port-664000" cluster
	I0408 11:02:11.378406   10619 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 11:02:11.378424   10619 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 11:02:11.378435   10619 cache.go:56] Caching tarball of preloaded images
	I0408 11:02:11.378492   10619 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 11:02:11.378497   10619 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 11:02:11.378564   10619 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/default-k8s-diff-port-664000/config.json ...
	I0408 11:02:11.378577   10619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/default-k8s-diff-port-664000/config.json: {Name:mk7bf7bd53646250facd5bbcec6c9967a5ae049c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:02:11.378812   10619 start.go:360] acquireMachinesLock for default-k8s-diff-port-664000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:02:11.378848   10619 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "default-k8s-diff-port-664000"
	I0408 11:02:11.378864   10619 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-664000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-664000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:02:11.378902   10619 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:02:11.386372   10619 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:02:11.403565   10619 start.go:159] libmachine.API.Create for "default-k8s-diff-port-664000" (driver="qemu2")
	I0408 11:02:11.403594   10619 client.go:168] LocalClient.Create starting
	I0408 11:02:11.403651   10619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:02:11.403683   10619 main.go:141] libmachine: Decoding PEM data...
	I0408 11:02:11.403697   10619 main.go:141] libmachine: Parsing certificate...
	I0408 11:02:11.403733   10619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:02:11.403760   10619 main.go:141] libmachine: Decoding PEM data...
	I0408 11:02:11.403767   10619 main.go:141] libmachine: Parsing certificate...
	I0408 11:02:11.404131   10619 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:02:11.574688   10619 main.go:141] libmachine: Creating SSH key...
	I0408 11:02:11.642769   10619 main.go:141] libmachine: Creating Disk image...
	I0408 11:02:11.642774   10619 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:02:11.643016   10619 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0408 11:02:11.655624   10619 main.go:141] libmachine: STDOUT: 
	I0408 11:02:11.655647   10619 main.go:141] libmachine: STDERR: 
	I0408 11:02:11.655706   10619 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2 +20000M
	I0408 11:02:11.666279   10619 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:02:11.666293   10619 main.go:141] libmachine: STDERR: 
	I0408 11:02:11.666309   10619 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0408 11:02:11.666312   10619 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:02:11.666345   10619 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:a5:af:60:58:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0408 11:02:11.668043   10619 main.go:141] libmachine: STDOUT: 
	I0408 11:02:11.668059   10619 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:02:11.668079   10619 client.go:171] duration metric: took 264.477458ms to LocalClient.Create
	I0408 11:02:13.670323   10619 start.go:128] duration metric: took 2.291382708s to createHost
	I0408 11:02:13.670415   10619 start.go:83] releasing machines lock for "default-k8s-diff-port-664000", held for 2.291543209s
	W0408 11:02:13.670462   10619 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:13.682679   10619 out.go:177] * Deleting "default-k8s-diff-port-664000" in qemu2 ...
	W0408 11:02:13.717696   10619 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:13.717720   10619 start.go:728] Will try again in 5 seconds ...
	I0408 11:02:18.719936   10619 start.go:360] acquireMachinesLock for default-k8s-diff-port-664000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:02:18.720546   10619 start.go:364] duration metric: took 505.334µs to acquireMachinesLock for "default-k8s-diff-port-664000"
	I0408 11:02:18.720769   10619 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-664000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-664000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:02:18.721024   10619 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:02:18.730732   10619 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:02:18.781124   10619 start.go:159] libmachine.API.Create for "default-k8s-diff-port-664000" (driver="qemu2")
	I0408 11:02:18.781177   10619 client.go:168] LocalClient.Create starting
	I0408 11:02:18.781280   10619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:02:18.781347   10619 main.go:141] libmachine: Decoding PEM data...
	I0408 11:02:18.781365   10619 main.go:141] libmachine: Parsing certificate...
	I0408 11:02:18.781438   10619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:02:18.781480   10619 main.go:141] libmachine: Decoding PEM data...
	I0408 11:02:18.781494   10619 main.go:141] libmachine: Parsing certificate...
	I0408 11:02:18.782017   10619 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:02:18.975071   10619 main.go:141] libmachine: Creating SSH key...
	I0408 11:02:19.117208   10619 main.go:141] libmachine: Creating Disk image...
	I0408 11:02:19.117215   10619 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:02:19.117428   10619 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0408 11:02:19.130229   10619 main.go:141] libmachine: STDOUT: 
	I0408 11:02:19.130246   10619 main.go:141] libmachine: STDERR: 
	I0408 11:02:19.130307   10619 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2 +20000M
	I0408 11:02:19.141195   10619 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:02:19.141210   10619 main.go:141] libmachine: STDERR: 
	I0408 11:02:19.141221   10619 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0408 11:02:19.141233   10619 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:02:19.141265   10619 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:9a:82:e0:14:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0408 11:02:19.142966   10619 main.go:141] libmachine: STDOUT: 
	I0408 11:02:19.142984   10619 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:02:19.143000   10619 client.go:171] duration metric: took 361.816375ms to LocalClient.Create
	I0408 11:02:21.145184   10619 start.go:128] duration metric: took 2.424119291s to createHost
	I0408 11:02:21.145231   10619 start.go:83] releasing machines lock for "default-k8s-diff-port-664000", held for 2.42461875s
	W0408 11:02:21.145414   10619 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-664000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-664000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:21.157589   10619 out.go:177] 
	W0408 11:02:21.162789   10619 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:02:21.162823   10619 out.go:239] * 
	* 
	W0408 11:02:21.165499   10619 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:02:21.176567   10619 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-664000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (63.065916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.00s)

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-956000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-956000 create -f testdata/busybox.yaml: exit status 1 (28.902833ms)

** stderr ** 
	error: context "embed-certs-956000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-956000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000: exit status 7 (29.564667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-956000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000: exit status 7 (30.647ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-956000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-956000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-956000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-956000 describe deploy/metrics-server -n kube-system: exit status 1 (26.87475ms)

** stderr ** 
	error: context "embed-certs-956000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-956000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000: exit status 7 (31.602541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-956000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (6.29s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-956000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-956000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (6.218952834s)

-- stdout --
	* [embed-certs-956000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-956000" primary control-plane node in "embed-certs-956000" cluster
	* Restarting existing qemu2 VM for "embed-certs-956000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-956000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0408 11:02:20.048873   10667 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:02:20.049017   10667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:20.049021   10667 out.go:304] Setting ErrFile to fd 2...
	I0408 11:02:20.049023   10667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:20.049148   10667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:02:20.050163   10667 out.go:298] Setting JSON to false
	I0408 11:02:20.066239   10667 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7310,"bootTime":1712592030,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:02:20.066318   10667 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:02:20.070175   10667 out.go:177] * [embed-certs-956000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:02:20.078028   10667 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:02:20.078084   10667 notify.go:220] Checking for updates...
	I0408 11:02:20.081844   10667 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:02:20.084974   10667 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:02:20.088012   10667 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:02:20.091001   10667 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:02:20.094045   10667 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:02:20.097228   10667 config.go:182] Loaded profile config "embed-certs-956000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:02:20.097499   10667 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:02:20.101987   10667 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 11:02:20.109037   10667 start.go:297] selected driver: qemu2
	I0408 11:02:20.109043   10667 start.go:901] validating driver "qemu2" against &{Name:embed-certs-956000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-956000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:02:20.109091   10667 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:02:20.111479   10667 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:02:20.111534   10667 cni.go:84] Creating CNI manager for ""
	I0408 11:02:20.111541   10667 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 11:02:20.111572   10667 start.go:340] cluster config:
	{Name:embed-certs-956000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-956000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:02:20.115758   10667 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:02:20.123980   10667 out.go:177] * Starting "embed-certs-956000" primary control-plane node in "embed-certs-956000" cluster
	I0408 11:02:20.128022   10667 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 11:02:20.128037   10667 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 11:02:20.128047   10667 cache.go:56] Caching tarball of preloaded images
	I0408 11:02:20.128100   10667 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 11:02:20.128108   10667 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 11:02:20.128179   10667 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/embed-certs-956000/config.json ...
	I0408 11:02:20.128722   10667 start.go:360] acquireMachinesLock for embed-certs-956000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:02:21.145311   10667 start.go:364] duration metric: took 1.016567417s to acquireMachinesLock for "embed-certs-956000"
	I0408 11:02:21.145418   10667 start.go:96] Skipping create...Using existing machine configuration
	I0408 11:02:21.145439   10667 fix.go:54] fixHost starting: 
	I0408 11:02:21.145832   10667 fix.go:112] recreateIfNeeded on embed-certs-956000: state=Stopped err=<nil>
	W0408 11:02:21.145864   10667 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 11:02:21.157591   10667 out.go:177] * Restarting existing qemu2 VM for "embed-certs-956000" ...
	I0408 11:02:21.165904   10667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:8e:0b:a1:16:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/disk.qcow2
	I0408 11:02:21.175852   10667 main.go:141] libmachine: STDOUT: 
	I0408 11:02:21.175919   10667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:02:21.176033   10667 fix.go:56] duration metric: took 30.59875ms for fixHost
	I0408 11:02:21.176051   10667 start.go:83] releasing machines lock for "embed-certs-956000", held for 30.706ms
	W0408 11:02:21.176078   10667 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:02:21.176195   10667 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:21.176214   10667 start.go:728] Will try again in 5 seconds ...
	I0408 11:02:26.178474   10667 start.go:360] acquireMachinesLock for embed-certs-956000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:02:26.178805   10667 start.go:364] duration metric: took 247.666µs to acquireMachinesLock for "embed-certs-956000"
	I0408 11:02:26.178942   10667 start.go:96] Skipping create...Using existing machine configuration
	I0408 11:02:26.178962   10667 fix.go:54] fixHost starting: 
	I0408 11:02:26.179796   10667 fix.go:112] recreateIfNeeded on embed-certs-956000: state=Stopped err=<nil>
	W0408 11:02:26.179826   10667 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 11:02:26.185439   10667 out.go:177] * Restarting existing qemu2 VM for "embed-certs-956000" ...
	I0408 11:02:26.193575   10667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:8e:0b:a1:16:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/embed-certs-956000/disk.qcow2
	I0408 11:02:26.202840   10667 main.go:141] libmachine: STDOUT: 
	I0408 11:02:26.202903   10667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:02:26.202982   10667 fix.go:56] duration metric: took 24.022916ms for fixHost
	I0408 11:02:26.202998   10667 start.go:83] releasing machines lock for "embed-certs-956000", held for 24.170458ms
	W0408 11:02:26.203220   10667 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-956000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-956000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:26.210305   10667 out.go:177] 
	W0408 11:02:26.214425   10667 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:02:26.214485   10667 out.go:239] * 
	* 
	W0408 11:02:26.217208   10667 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:02:26.224293   10667 out.go:177] 

                                                
                                                
** /stderr **
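Every start attempt in the log above fails at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the socket_vmnet daemon is evidently not running (or not listening) on this CI host. A minimal Go sketch of the same connectivity check, assuming only the socket path taken from the logged config; this is a hypothetical diagnostic, not part of the test suite:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Dial the unix socket that socket_vmnet_client connects to; the path
        // matches SocketVMnetPath in the cluster config logged above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1) // same symptom as the "Connection refused" in the driver output
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

On this agent the probe would exit with the same "connection refused" error the driver reports.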
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-956000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000: exit status 7 (65.081875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-956000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.29s)
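The stderr above also shows the start path's retry shape: a first failed fixHost is downgraded to a warning ("StartHost failed, but will try again"), the code waits five seconds, and a second failure is fatal (GUEST_PROVISION, exit 80). A condensed sketch of that control flow; the function names are illustrative, not minikube's actual API:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startWithRetry mirrors the two-attempt behavior visible in the log:
    // warn on the first failure, pause, then fail hard on the second.
    func startWithRetry(start func() error) error {
        if err := start(); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(5 * time.Second)
            if err := start(); err != nil {
                return fmt.Errorf("error provisioning guest: %w", err)
            }
        }
        return nil
    }

    func main() {
        err := startWithRetry(func() error {
            return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
        })
        fmt.Println(err) // both attempts fail, matching the run above
    }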

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-664000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-664000 create -f testdata/busybox.yaml: exit status 1 (28.745916ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-664000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-664000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (30.752167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (31.011792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
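Both kubectl invocations above fail before reaching any cluster: the failed start never (re)created the profile's kubeconfig context, so kubectl rejects --context immediately. A small sketch of how such a context could be checked up front, using kubectl's real "config get-contexts -o name" output (one context name per line); the helper is illustrative only:

    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "os/exec"
    )

    // contextExists reports whether kubectl knows the named kubeconfig context.
    func contextExists(name string) (bool, error) {
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            return false, err
        }
        sc := bufio.NewScanner(bytes.NewReader(out))
        for sc.Scan() {
            if sc.Text() == name {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := contextExists("default-k8s-diff-port-664000")
        fmt.Println(ok, err) // false <nil> on this agent: the context was never created
    }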

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-664000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-664000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-664000 describe deploy/metrics-server -n kube-system: exit status 1 (26.918709ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-664000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-664000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (31.309ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
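The expected string in the assertion above, "fake.domain/registry.k8s.io/echoserver:1.4", is simply the --registries override prepended to the --images override passed to "addons enable" earlier in this test. A sketch of that composition; the helper name is hypothetical:

    package main

    import "fmt"

    // overriddenImage joins a registry override with a custom image name,
    // the way the expected string in the assertion above is formed.
    func overriddenImage(registry, image string) string {
        if registry == "" {
            return image
        }
        return registry + "/" + image
    }

    func main() {
        // Values taken from the flags in the command above:
        //   --images=MetricsServer=registry.k8s.io/echoserver:1.4
        //   --registries=MetricsServer=fake.domain
        fmt.Println(overriddenImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
        // Output: fake.domain/registry.k8s.io/echoserver:1.4
    }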

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-664000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-664000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (5.188158209s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-664000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-664000" primary control-plane node in "default-k8s-diff-port-664000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-664000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-664000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:02:24.975017   10710 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:02:24.975159   10710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:24.975162   10710 out.go:304] Setting ErrFile to fd 2...
	I0408 11:02:24.975165   10710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:24.975306   10710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:02:24.976273   10710 out.go:298] Setting JSON to false
	I0408 11:02:24.992612   10710 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7314,"bootTime":1712592030,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:02:24.992669   10710 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:02:24.997535   10710 out.go:177] * [default-k8s-diff-port-664000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:02:25.005394   10710 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:02:25.005428   10710 notify.go:220] Checking for updates...
	I0408 11:02:25.012473   10710 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:02:25.015477   10710 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:02:25.018466   10710 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:02:25.021496   10710 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:02:25.023010   10710 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:02:25.026715   10710 config.go:182] Loaded profile config "default-k8s-diff-port-664000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:02:25.026973   10710 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:02:25.031492   10710 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 11:02:25.037414   10710 start.go:297] selected driver: qemu2
	I0408 11:02:25.037422   10710 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-664000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-664000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:02:25.037500   10710 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:02:25.040005   10710 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:02:25.040046   10710 cni.go:84] Creating CNI manager for ""
	I0408 11:02:25.040057   10710 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 11:02:25.040086   10710 start.go:340] cluster config:
	{Name:default-k8s-diff-port-664000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-664000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:02:25.044427   10710 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:02:25.052488   10710 out.go:177] * Starting "default-k8s-diff-port-664000" primary control-plane node in "default-k8s-diff-port-664000" cluster
	I0408 11:02:25.056512   10710 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 11:02:25.056525   10710 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 11:02:25.056534   10710 cache.go:56] Caching tarball of preloaded images
	I0408 11:02:25.056585   10710 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 11:02:25.056591   10710 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 11:02:25.056657   10710 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/default-k8s-diff-port-664000/config.json ...
	I0408 11:02:25.057152   10710 start.go:360] acquireMachinesLock for default-k8s-diff-port-664000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:02:25.057179   10710 start.go:364] duration metric: took 20.292µs to acquireMachinesLock for "default-k8s-diff-port-664000"
	I0408 11:02:25.057187   10710 start.go:96] Skipping create...Using existing machine configuration
	I0408 11:02:25.057192   10710 fix.go:54] fixHost starting: 
	I0408 11:02:25.057309   10710 fix.go:112] recreateIfNeeded on default-k8s-diff-port-664000: state=Stopped err=<nil>
	W0408 11:02:25.057319   10710 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 11:02:25.061494   10710 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-664000" ...
	I0408 11:02:25.068501   10710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:9a:82:e0:14:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0408 11:02:25.070625   10710 main.go:141] libmachine: STDOUT: 
	I0408 11:02:25.070647   10710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:02:25.070675   10710 fix.go:56] duration metric: took 13.4815ms for fixHost
	I0408 11:02:25.070681   10710 start.go:83] releasing machines lock for "default-k8s-diff-port-664000", held for 13.49725ms
	W0408 11:02:25.070686   10710 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:02:25.070718   10710 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:25.070723   10710 start.go:728] Will try again in 5 seconds ...
	I0408 11:02:30.072910   10710 start.go:360] acquireMachinesLock for default-k8s-diff-port-664000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:02:30.073186   10710 start.go:364] duration metric: took 213.917µs to acquireMachinesLock for "default-k8s-diff-port-664000"
	I0408 11:02:30.073284   10710 start.go:96] Skipping create...Using existing machine configuration
	I0408 11:02:30.073301   10710 fix.go:54] fixHost starting: 
	I0408 11:02:30.073831   10710 fix.go:112] recreateIfNeeded on default-k8s-diff-port-664000: state=Stopped err=<nil>
	W0408 11:02:30.073846   10710 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 11:02:30.084138   10710 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-664000" ...
	I0408 11:02:30.087236   10710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:9a:82:e0:14:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0408 11:02:30.096456   10710 main.go:141] libmachine: STDOUT: 
	I0408 11:02:30.096514   10710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:02:30.096599   10710 fix.go:56] duration metric: took 23.296791ms for fixHost
	I0408 11:02:30.096620   10710 start.go:83] releasing machines lock for "default-k8s-diff-port-664000", held for 23.415292ms
	W0408 11:02:30.096850   10710 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-664000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-664000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:30.106059   10710 out.go:177] 
	W0408 11:02:30.109165   10710 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:02:30.109193   10710 out.go:239] * 
	* 
	W0408 11:02:30.111932   10710 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:02:30.120086   10710 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-664000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (65.47375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)
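Each post-mortem in this report runs "minikube status" with --format={{.Host}}, a Go text/template rendered against the status structure, which is why the captured stdout is the bare word "Stopped". A self-contained illustration; field names other than Host are assumptions:

    package main

    import (
        "os"
        "text/template"
    )

    // Status stands in for the structure minikube renders; only Host is
    // referenced by the --format string used throughout this report.
    type Status struct {
        Host      string
        Kubelet   string
        APIServer string
    }

    func main() {
        tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
        _ = tmpl.Execute(os.Stdout, Status{Host: "Stopped"}) // prints: Stopped
    }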

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-956000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000: exit status 7 (33.571083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-956000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
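The wait here fails in 0.03s because building the client config aborts before the first poll: with no "embed-certs-956000" context, there is nothing to query. The general shape of such a pod wait is a bounded poll loop; a sketch under that assumption (the helper is illustrative, not the test suite's implementation):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check until it succeeds or the timeout elapses. In the
    // failure above, the equivalent loop never starts because constructing
    // the client config fails first.
    func waitFor(timeout, interval time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting: %w", err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := waitFor(3*time.Second, time.Second, func() error {
            return errors.New(`context "embed-certs-956000" does not exist`)
        })
        fmt.Println(err)
    }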

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-956000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-956000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-956000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.507459ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-956000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-956000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000: exit status 7 (31.107958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-956000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-956000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000: exit status 7 (30.929834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-956000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
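The "(-want +got)" listing above is go-cmp diff output: every expected image sits on the -want side because "image list" had no running VM to query. A minimal reproduction of that diff shape, assuming the github.com/google/go-cmp module is available:

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/kube-apiserver:v1.29.3",
            "registry.k8s.io/pause:3.9",
        }
        got := []string{} // nothing listed: the VM never started
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.29.3 images missing (-want +got):\n%s", diff)
        }
    }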

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-956000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-956000 --alsologtostderr -v=1: exit status 83 (44.646125ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-956000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-956000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:02:26.497395   10729 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:02:26.497776   10729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:26.497781   10729 out.go:304] Setting ErrFile to fd 2...
	I0408 11:02:26.497784   10729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:26.497948   10729 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:02:26.498189   10729 out.go:298] Setting JSON to false
	I0408 11:02:26.498199   10729 mustload.go:65] Loading cluster: embed-certs-956000
	I0408 11:02:26.498612   10729 config.go:182] Loaded profile config "embed-certs-956000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:02:26.503550   10729 out.go:177] * The control-plane node embed-certs-956000 host is not running: state=Stopped
	I0408 11:02:26.507549   10729 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-956000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-956000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000: exit status 7 (30.858208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-956000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000: exit status 7 (31.068ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-956000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
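Unlike the start failures (exit 80), pause exits 83 here because the stopped host is detected during cluster loading (mustload.go in the stderr above) before any work is attempted. A guard sketch that runs the same status command the post-mortems use and skips pause when the host is down; the binary path and profile name come from the log, while the exit-code handling is illustrative:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "embed-certs-956000")
        out, err := cmd.Output()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // Exit status 7 with host "Stopped" is the case this report
            // annotates as "may be ok": skip pause rather than fail.
            fmt.Printf("status exit %d, host=%s; not attempting pause\n", ee.ExitCode(), out)
            return
        }
        fmt.Printf("host=%s; pause can proceed\n", out)
    }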

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-953000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-953000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.1: exit status 80 (9.780157083s)

                                                
                                                
-- stdout --
	* [newest-cni-953000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-953000" primary control-plane node in "newest-cni-953000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-953000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:02:26.973829   10752 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:02:26.973975   10752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:26.973978   10752 out.go:304] Setting ErrFile to fd 2...
	I0408 11:02:26.973981   10752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:26.974099   10752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:02:26.975196   10752 out.go:298] Setting JSON to false
	I0408 11:02:26.991355   10752 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7316,"bootTime":1712592030,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:02:26.991429   10752 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:02:26.995838   10752 out.go:177] * [newest-cni-953000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:02:27.002824   10752 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:02:27.006799   10752 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:02:27.002886   10752 notify.go:220] Checking for updates...
	I0408 11:02:27.012727   10752 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:02:27.015794   10752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:02:27.018760   10752 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:02:27.021759   10752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:02:27.025133   10752 config.go:182] Loaded profile config "default-k8s-diff-port-664000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:02:27.025197   10752 config.go:182] Loaded profile config "multinode-529000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:02:27.025253   10752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:02:27.029853   10752 out.go:177] * Using the qemu2 driver based on user configuration
	I0408 11:02:27.036753   10752 start.go:297] selected driver: qemu2
	I0408 11:02:27.036761   10752 start.go:901] validating driver "qemu2" against <nil>
	I0408 11:02:27.036768   10752 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:02:27.039158   10752 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0408 11:02:27.039184   10752 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0408 11:02:27.046741   10752 out.go:177] * Automatically selected the socket_vmnet network
	I0408 11:02:27.049920   10752 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0408 11:02:27.049966   10752 cni.go:84] Creating CNI manager for ""
	I0408 11:02:27.049974   10752 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 11:02:27.049979   10752 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 11:02:27.050016   10752 start.go:340] cluster config:
	{Name:newest-cni-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:newest-cni-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:02:27.054944   10752 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:02:27.063736   10752 out.go:177] * Starting "newest-cni-953000" primary control-plane node in "newest-cni-953000" cluster
	I0408 11:02:27.066779   10752 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime docker
	I0408 11:02:27.066794   10752 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-docker-overlay2-arm64.tar.lz4
	I0408 11:02:27.066803   10752 cache.go:56] Caching tarball of preloaded images
	I0408 11:02:27.066867   10752 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 11:02:27.066873   10752 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.1 on docker
	I0408 11:02:27.066972   10752 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/newest-cni-953000/config.json ...
	I0408 11:02:27.066986   10752 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/newest-cni-953000/config.json: {Name:mka5593107c1ebf489afed78212f7209bb1bb0e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:02:27.067202   10752 start.go:360] acquireMachinesLock for newest-cni-953000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:02:27.067232   10752 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "newest-cni-953000"
	I0408 11:02:27.067243   10752 start.go:93] Provisioning new machine with config: &{Name:newest-cni-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:newest-cni-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:02:27.067269   10752 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:02:27.074629   10752 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:02:27.091814   10752 start.go:159] libmachine.API.Create for "newest-cni-953000" (driver="qemu2")
	I0408 11:02:27.091843   10752 client.go:168] LocalClient.Create starting
	I0408 11:02:27.091915   10752 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:02:27.091946   10752 main.go:141] libmachine: Decoding PEM data...
	I0408 11:02:27.091961   10752 main.go:141] libmachine: Parsing certificate...
	I0408 11:02:27.092002   10752 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:02:27.092029   10752 main.go:141] libmachine: Decoding PEM data...
	I0408 11:02:27.092038   10752 main.go:141] libmachine: Parsing certificate...
	I0408 11:02:27.092407   10752 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:02:27.257239   10752 main.go:141] libmachine: Creating SSH key...
	I0408 11:02:27.300508   10752 main.go:141] libmachine: Creating Disk image...
	I0408 11:02:27.300513   10752 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:02:27.300732   10752 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/disk.qcow2
	I0408 11:02:27.312837   10752 main.go:141] libmachine: STDOUT: 
	I0408 11:02:27.312865   10752 main.go:141] libmachine: STDERR: 
	I0408 11:02:27.312916   10752 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/disk.qcow2 +20000M
	I0408 11:02:27.323580   10752 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:02:27.323607   10752 main.go:141] libmachine: STDERR: 
	I0408 11:02:27.323623   10752 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/disk.qcow2
	I0408 11:02:27.323630   10752 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:02:27.323662   10752 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:c9:33:a4:18:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/disk.qcow2
	I0408 11:02:27.325427   10752 main.go:141] libmachine: STDOUT: 
	I0408 11:02:27.325448   10752 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:02:27.325471   10752 client.go:171] duration metric: took 233.619042ms to LocalClient.Create
	I0408 11:02:29.327777   10752 start.go:128] duration metric: took 2.260441333s to createHost
	I0408 11:02:29.327906   10752 start.go:83] releasing machines lock for "newest-cni-953000", held for 2.260644791s
	W0408 11:02:29.327964   10752 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:29.348132   10752 out.go:177] * Deleting "newest-cni-953000" in qemu2 ...
	W0408 11:02:29.378047   10752 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:29.378076   10752 start.go:728] Will try again in 5 seconds ...
	I0408 11:02:34.380282   10752 start.go:360] acquireMachinesLock for newest-cni-953000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:02:34.380731   10752 start.go:364] duration metric: took 356.208µs to acquireMachinesLock for "newest-cni-953000"
	I0408 11:02:34.380967   10752 start.go:93] Provisioning new machine with config: &{Name:newest-cni-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:newest-cni-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 11:02:34.381236   10752 start.go:125] createHost starting for "" (driver="qemu2")
	I0408 11:02:34.389953   10752 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:02:34.438641   10752 start.go:159] libmachine.API.Create for "newest-cni-953000" (driver="qemu2")
	I0408 11:02:34.438696   10752 client.go:168] LocalClient.Create starting
	I0408 11:02:34.438845   10752 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/ca.pem
	I0408 11:02:34.438910   10752 main.go:141] libmachine: Decoding PEM data...
	I0408 11:02:34.438929   10752 main.go:141] libmachine: Parsing certificate...
	I0408 11:02:34.438992   10752 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18585-6624/.minikube/certs/cert.pem
	I0408 11:02:34.439034   10752 main.go:141] libmachine: Decoding PEM data...
	I0408 11:02:34.439049   10752 main.go:141] libmachine: Parsing certificate...
	I0408 11:02:34.439671   10752 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso...
	I0408 11:02:34.605736   10752 main.go:141] libmachine: Creating SSH key...
	I0408 11:02:34.650250   10752 main.go:141] libmachine: Creating Disk image...
	I0408 11:02:34.650256   10752 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0408 11:02:34.650484   10752 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/disk.qcow2
	I0408 11:02:34.662708   10752 main.go:141] libmachine: STDOUT: 
	I0408 11:02:34.662725   10752 main.go:141] libmachine: STDERR: 
	I0408 11:02:34.662777   10752 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/disk.qcow2 +20000M
	I0408 11:02:34.673658   10752 main.go:141] libmachine: STDOUT: Image resized.
	
	I0408 11:02:34.673697   10752 main.go:141] libmachine: STDERR: 
	I0408 11:02:34.673709   10752 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/disk.qcow2
	I0408 11:02:34.673713   10752 main.go:141] libmachine: Starting QEMU VM...
	I0408 11:02:34.673747   10752 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:93:67:a9:08:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/disk.qcow2
	I0408 11:02:34.675697   10752 main.go:141] libmachine: STDOUT: 
	I0408 11:02:34.675712   10752 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:02:34.675724   10752 client.go:171] duration metric: took 237.020041ms to LocalClient.Create
	I0408 11:02:36.678018   10752 start.go:128] duration metric: took 2.296720209s to createHost
	I0408 11:02:36.678104   10752 start.go:83] releasing machines lock for "newest-cni-953000", held for 2.297333s
	W0408 11:02:36.678586   10752 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:36.689009   10752 out.go:177] 
	W0408 11:02:36.697375   10752 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:02:36.697419   10752 out.go:239] * 
	W0408 11:02:36.700052   10752 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:02:36.711102   10752 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-953000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.1": exit status 80
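
Every start attempt in this run fails at the same step: qemu-system-aarch64 is launched through socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"). The Go probe below is a diagnostic sketch, not minikube code; it assumes the SocketVMnetPath value shown in the cluster config above and simply reproduces the failing connectivity check.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // SocketVMnetPath from the profile config; adjust if the daemon
        // listens elsewhere on your host.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // This is the state the log captures: nothing is listening.
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

A refused connection on a unix socket almost always means no daemon holds it, which is why every VM create and restart in the remainder of this report fails the same way.
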
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-953000 -n newest-cni-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-953000 -n newest-cni-953000: exit status 7 (72.276666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.85s)
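
For contrast, the disk preparation in the log above succeeds: qemu-img first converts the raw boot image to qcow2, then grows it by 20000M, and only the subsequent networking step fails. The sketch below mirrors that convert-then-resize sequence in Go; the helper name and the relative paths are illustrative, not libmachine's actual implementation.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // createQcow2 mirrors the two qemu-img steps in the log: convert the raw
    // boot image to qcow2, then grow it by the requested amount.
    func createQcow2(rawPath, qcowPath string, extraMB int) error {
        if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", rawPath, qcowPath).CombinedOutput(); err != nil {
            return fmt.Errorf("qemu-img convert: %v: %s", err, out)
        }
        if out, err := exec.Command("qemu-img", "resize", qcowPath, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
            return fmt.Errorf("qemu-img resize: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := createQcow2("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
            fmt.Println(err)
        }
    }
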

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-664000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (33.087125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
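
All of the default-k8s-diff-port post-stop subtests fail before touching the cluster: the profile's context was never recreated in the kubeconfig because the VM never came back up. A minimal sketch of that existence check with k8s.io/client-go/tools/clientcmd (the kubeconfig path is the one from this run's environment; the program itself is illustrative):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := "/Users/jenkins/minikube-integration/18585-6624/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(kubeconfig)
        if err != nil {
            fmt.Println("load kubeconfig:", err)
            return
        }
        if _, ok := cfg.Contexts["default-k8s-diff-port-664000"]; !ok {
            // The state this test reports: the context is simply absent.
            fmt.Println(`context "default-k8s-diff-port-664000" does not exist`)
        }
    }
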

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-664000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-664000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-664000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.916583ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-664000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-664000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (30.70925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-664000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (31.000208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
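
The "(-want +got)" block above is a go-cmp style diff: every "-" line is an image that was expected but missing, and the empty "+" side reflects an image list that came back empty because the host is stopped. A minimal sketch of how such a diff is produced with github.com/google/go-cmp (the suite's own helper may differ in detail):

    package main

    import (
        "fmt"
        "sort"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "registry.k8s.io/kube-apiserver:v1.29.3",
            "registry.k8s.io/pause:3.9",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }
        got := []string{} // image list is empty: the VM never started
        sort.Strings(want)
        sort.Strings(got)
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("images missing (-want +got):\n%s", diff)
        }
    }
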

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-664000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-664000 --alsologtostderr -v=1: exit status 83 (42.2365ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-664000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-664000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:02:30.393369   10776 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:02:30.393542   10776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:30.393545   10776 out.go:304] Setting ErrFile to fd 2...
	I0408 11:02:30.393548   10776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:30.393690   10776 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:02:30.393917   10776 out.go:298] Setting JSON to false
	I0408 11:02:30.393925   10776 mustload.go:65] Loading cluster: default-k8s-diff-port-664000
	I0408 11:02:30.394115   10776 config.go:182] Loaded profile config "default-k8s-diff-port-664000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 11:02:30.398149   10776 out.go:177] * The control-plane node default-k8s-diff-port-664000 host is not running: state=Stopped
	I0408 11:02:30.401182   10776 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-664000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-664000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (31.149708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (30.763333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
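
The post-mortem status checks use --format={{.Host}}, which minikube renders as a Go text/template over its status data; with the host stopped the template prints just "Stopped" and the command exits with status 7, as seen throughout this report. The sketch below shows the templating mechanism only; the Status struct is a stand-in, not minikube's real type.

    package main

    import (
        "os"
        "text/template"
    )

    // Status is a stand-in for minikube's internal status type; only the
    // field exercised by --format={{.Host}} is sketched here.
    type Status struct {
        Host string
    }

    func main() {
        tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
        // With the profile stopped, this renders exactly "Stopped".
        _ = tmpl.Execute(os.Stdout, Status{Host: "Stopped"})
    }
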

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-953000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-953000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.1: exit status 80 (5.192418083s)

                                                
                                                
-- stdout --
	* [newest-cni-953000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-953000" primary control-plane node in "newest-cni-953000" cluster
	* Restarting existing qemu2 VM for "newest-cni-953000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-953000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:02:40.306639   10831 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:02:40.306791   10831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:40.306795   10831 out.go:304] Setting ErrFile to fd 2...
	I0408 11:02:40.306797   10831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:40.306921   10831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:02:40.307918   10831 out.go:298] Setting JSON to false
	I0408 11:02:40.323985   10831 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7330,"bootTime":1712592030,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 11:02:40.324053   10831 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 11:02:40.329374   10831 out.go:177] * [newest-cni-953000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 11:02:40.337271   10831 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 11:02:40.337324   10831 notify.go:220] Checking for updates...
	I0408 11:02:40.344179   10831 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 11:02:40.347269   10831 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 11:02:40.350262   10831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:02:40.353307   10831 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 11:02:40.356261   10831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:02:40.359598   10831 config.go:182] Loaded profile config "newest-cni-953000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.1
	I0408 11:02:40.359881   10831 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:02:40.364262   10831 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 11:02:40.371294   10831 start.go:297] selected driver: qemu2
	I0408 11:02:40.371304   10831 start.go:901] validating driver "qemu2" against &{Name:newest-cni-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:newest-cni-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:02:40.371358   10831 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:02:40.373741   10831 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0408 11:02:40.373789   10831 cni.go:84] Creating CNI manager for ""
	I0408 11:02:40.373797   10831 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 11:02:40.373820   10831 start.go:340] cluster config:
	{Name:newest-cni-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:newest-cni-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:02:40.378487   10831 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:02:40.387334   10831 out.go:177] * Starting "newest-cni-953000" primary control-plane node in "newest-cni-953000" cluster
	I0408 11:02:40.392251   10831 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime docker
	I0408 11:02:40.392271   10831 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-docker-overlay2-arm64.tar.lz4
	I0408 11:02:40.392279   10831 cache.go:56] Caching tarball of preloaded images
	I0408 11:02:40.392332   10831 preload.go:173] Found /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0408 11:02:40.392338   10831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.1 on docker
	I0408 11:02:40.392405   10831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/newest-cni-953000/config.json ...
	I0408 11:02:40.392939   10831 start.go:360] acquireMachinesLock for newest-cni-953000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:02:40.392966   10831 start.go:364] duration metric: took 20.5µs to acquireMachinesLock for "newest-cni-953000"
	I0408 11:02:40.392975   10831 start.go:96] Skipping create...Using existing machine configuration
	I0408 11:02:40.392994   10831 fix.go:54] fixHost starting: 
	I0408 11:02:40.393115   10831 fix.go:112] recreateIfNeeded on newest-cni-953000: state=Stopped err=<nil>
	W0408 11:02:40.393123   10831 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 11:02:40.396312   10831 out.go:177] * Restarting existing qemu2 VM for "newest-cni-953000" ...
	I0408 11:02:40.403341   10831 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:93:67:a9:08:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/disk.qcow2
	I0408 11:02:40.405348   10831 main.go:141] libmachine: STDOUT: 
	I0408 11:02:40.405372   10831 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:02:40.405400   10831 fix.go:56] duration metric: took 12.403833ms for fixHost
	I0408 11:02:40.405406   10831 start.go:83] releasing machines lock for "newest-cni-953000", held for 12.435834ms
	W0408 11:02:40.405411   10831 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:02:40.405439   10831 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:40.405444   10831 start.go:728] Will try again in 5 seconds ...
	I0408 11:02:45.407676   10831 start.go:360] acquireMachinesLock for newest-cni-953000: {Name:mkdd3581178bd4e31993e452c28670bf27696492 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:02:45.408042   10831 start.go:364] duration metric: took 286.166µs to acquireMachinesLock for "newest-cni-953000"
	I0408 11:02:45.408179   10831 start.go:96] Skipping create...Using existing machine configuration
	I0408 11:02:45.408196   10831 fix.go:54] fixHost starting: 
	I0408 11:02:45.408948   10831 fix.go:112] recreateIfNeeded on newest-cni-953000: state=Stopped err=<nil>
	W0408 11:02:45.408973   10831 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 11:02:45.418343   10831 out.go:177] * Restarting existing qemu2 VM for "newest-cni-953000" ...
	I0408 11:02:45.422534   10831 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:93:67:a9:08:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18585-6624/.minikube/machines/newest-cni-953000/disk.qcow2
	I0408 11:02:45.431774   10831 main.go:141] libmachine: STDOUT: 
	I0408 11:02:45.431861   10831 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0408 11:02:45.431978   10831 fix.go:56] duration metric: took 23.78025ms for fixHost
	I0408 11:02:45.432005   10831 start.go:83] releasing machines lock for "newest-cni-953000", held for 23.935542ms
	W0408 11:02:45.432215   10831 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-953000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0408 11:02:45.440348   10831 out.go:177] 
	W0408 11:02:45.444358   10831 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0408 11:02:45.444381   10831 out.go:239] * 
	W0408 11:02:45.447086   10831 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:02:45.454280   10831 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-953000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-rc.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-953000 -n newest-cni-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-953000 -n newest-cni-953000: exit status 7 (70.184542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
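
As with the first start, the failed host start is retried exactly once: StartHost fails, minikube waits five seconds ("Will try again in 5 seconds"), tries again, and then exits with GUEST_PROVISION. A toy sketch of that single-retry shape (the start function is a stand-in that fails the way this run does):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost is a stand-in for the real driver start; it fails the way
    // this run does while socket_vmnet is unreachable.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second)
            if err := startHost(); err != nil {
                fmt.Println("Exiting due to GUEST_PROVISION:", err)
            }
        }
    }
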

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-953000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-rc.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-rc.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-rc.1",
- 	"registry.k8s.io/kube-proxy:v1.30.0-rc.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-rc.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-953000 -n newest-cni-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-953000 -n newest-cni-953000: exit status 7 (32.394083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-953000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-953000 --alsologtostderr -v=1: exit status 83 (43.484583ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-953000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-953000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:02:45.645273   10845 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:02:45.645450   10845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:45.645454   10845 out.go:304] Setting ErrFile to fd 2...
	I0408 11:02:45.645456   10845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:02:45.645576   10845 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 11:02:45.645792   10845 out.go:298] Setting JSON to false
	I0408 11:02:45.645801   10845 mustload.go:65] Loading cluster: newest-cni-953000
	I0408 11:02:45.645998   10845 config.go:182] Loaded profile config "newest-cni-953000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-rc.1
	I0408 11:02:45.650394   10845 out.go:177] * The control-plane node newest-cni-953000 host is not running: state=Stopped
	I0408 11:02:45.653344   10845 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-953000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-953000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-953000 -n newest-cni-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-953000 -n newest-cni-953000: exit status 7 (31.912959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-953000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-953000 -n newest-cni-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-953000 -n newest-cni-953000: exit status 7 (32.997334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                    

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.29.3/json-events 10.07
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.23
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.23
21 TestDownloadOnly/v1.30.0-rc.1/json-events 9.9
22 TestDownloadOnly/v1.30.0-rc.1/preload-exists 0
25 TestDownloadOnly/v1.30.0-rc.1/kubectl 0
26 TestDownloadOnly/v1.30.0-rc.1/LogsDuration 0.09
27 TestDownloadOnly/v1.30.0-rc.1/DeleteAll 0.23
28 TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds 0.22
30 TestBinaryMirror 0.35
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 9.59
48 TestErrorSpam/start 0.39
49 TestErrorSpam/status 0.1
50 TestErrorSpam/pause 0.13
51 TestErrorSpam/unpause 0.13
52 TestErrorSpam/stop 8.68
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.84
64 TestFunctional/serial/CacheCmd/cache/add_local 1.17
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.23
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/License 0.27
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 1.62
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
135 TestFunctional/parallel/ProfileCmd/profile_list 0.11
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_addon-resizer_images 0.17
145 TestFunctional/delete_my-image_image 0.04
146 TestFunctional/delete_minikube_cached_images 0.04
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 2.02
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.34
202 TestMainNoArgs 0.04
249 TestStoppedBinaryUpgrade/Setup 1.03
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
266 TestNoKubernetes/serial/ProfileList 31.6
267 TestNoKubernetes/serial/Stop 3.23
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
281 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
284 TestStartStop/group/old-k8s-version/serial/Stop 2.96
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
295 TestStartStop/group/no-preload/serial/Stop 1.97
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
308 TestStartStop/group/embed-certs/serial/Stop 1.9
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.35
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
328 TestStartStop/group/newest-cni/serial/Stop 3.28
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-557000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-557000: exit status 85 (100.215375ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-557000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:35 PDT |          |
	|         | -p download-only-557000        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=qemu2                 |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 10:35:31
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 10:35:31.200047    7045 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:35:31.200220    7045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:35:31.200223    7045 out.go:304] Setting ErrFile to fd 2...
	I0408 10:35:31.200225    7045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:35:31.200357    7045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	W0408 10:35:31.200437    7045 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18585-6624/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18585-6624/.minikube/config/config.json: no such file or directory
	I0408 10:35:31.201741    7045 out.go:298] Setting JSON to true
	I0408 10:35:31.220355    7045 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5701,"bootTime":1712592030,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:35:31.220419    7045 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:35:31.226422    7045 out.go:97] [download-only-557000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:35:31.230433    7045 out.go:169] MINIKUBE_LOCATION=18585
	I0408 10:35:31.226546    7045 notify.go:220] Checking for updates...
	W0408 10:35:31.226573    7045 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball: no such file or directory
	I0408 10:35:31.238416    7045 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:35:31.242104    7045 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:35:31.245521    7045 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:35:31.248467    7045 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	W0408 10:35:31.255452    7045 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 10:35:31.255638    7045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:35:31.259220    7045 out.go:97] Using the qemu2 driver based on user configuration
	I0408 10:35:31.259228    7045 start.go:297] selected driver: qemu2
	I0408 10:35:31.259244    7045 start.go:901] validating driver "qemu2" against <nil>
	I0408 10:35:31.259328    7045 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 10:35:31.262801    7045 out.go:169] Automatically selected the socket_vmnet network
	I0408 10:35:31.269321    7045 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0408 10:35:31.269425    7045 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 10:35:31.269506    7045 cni.go:84] Creating CNI manager for ""
	I0408 10:35:31.269525    7045 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0408 10:35:31.269570    7045 start.go:340] cluster config:
	{Name:download-only-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:35:31.275207    7045 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:35:31.279007    7045 out.go:97] Downloading VM boot image ...
	I0408 10:35:31.279033    7045 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/iso/arm64/minikube-v1.33.0-1712570768-18585-arm64.iso
	I0408 10:35:40.437397    7045 out.go:97] Starting "download-only-557000" primary control-plane node in "download-only-557000" cluster
	I0408 10:35:40.437415    7045 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 10:35:40.501666    7045 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0408 10:35:40.501685    7045 cache.go:56] Caching tarball of preloaded images
	I0408 10:35:40.501883    7045 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 10:35:40.506076    7045 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0408 10:35:40.506085    7045 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0408 10:35:40.587245    7045 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0408 10:35:58.305892    7045 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0408 10:35:58.306101    7045 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0408 10:35:59.003872    7045 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0408 10:35:59.004072    7045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/download-only-557000/config.json ...
	I0408 10:35:59.004098    7045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/download-only-557000/config.json: {Name:mkf18c9815c3e0af2ad0f2abf2eb9a78416f266f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:35:59.004357    7045 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 10:35:59.004545    7045 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0408 10:35:59.693916    7045 out.go:169] 
	W0408 10:35:59.704010    7045 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18585-6624/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108477260 0x108477260 0x108477260 0x108477260 0x108477260 0x108477260 0x108477260] Decompressors:map[bz2:0x1400000f160 gz:0x1400000f168 tar:0x1400000f0f0 tar.bz2:0x1400000f110 tar.gz:0x1400000f120 tar.xz:0x1400000f130 tar.zst:0x1400000f140 tbz2:0x1400000f110 tgz:0x1400000f120 txz:0x1400000f130 tzst:0x1400000f140 xz:0x1400000f170 zip:0x1400000f190 zst:0x1400000f178] Getters:map[file:0x14002188560 http:0x14000886320 https:0x14000886370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0408 10:35:59.704049    7045 out_reason.go:110] 
	W0408 10:35:59.712929    7045 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 10:35:59.715907    7045 out.go:169] 
	
	
	* The control-plane node download-only-557000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-557000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
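The exit status 85 above is the expected outcome of this subtest; the underlying cause visible in the log is that minikube cannot cache kubectl v1.20.0 for darwin/arm64, because the .sha256 checksum URL on dl.k8s.io returns 404 (no kubectl builds for Apple silicon exist for a release that old). A minimal Go sketch of the same probe — the URLs are taken from the log above, everything else is illustrative and not minikube code:

package main

// Probe the kubectl release binary and its checksum file with HTTP HEAD,
// mirroring the download that fails with exit status 85 in the log above.
import (
	"fmt"
	"net/http"
)

func main() {
	base := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl"
	for _, url := range []string{base, base + ".sha256"} {
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println(url, "error:", err)
			continue
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Status) // the .sha256 URL is the 404 reported above
	}
}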

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-557000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.29.3/json-events (10.07s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-702000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-702000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 : (10.0706345s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (10.07s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-702000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-702000: exit status 85 (78.28675ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-557000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:35 PDT |                     |
	|         | -p download-only-557000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:35 PDT | 08 Apr 24 10:36 PDT |
	| delete  | -p download-only-557000        | download-only-557000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	| start   | -o=json --download-only        | download-only-702000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | -p download-only-702000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 10:36:00
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 10:36:00.387499    7086 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:36:00.387623    7086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:36:00.387626    7086 out.go:304] Setting ErrFile to fd 2...
	I0408 10:36:00.387628    7086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:36:00.387747    7086 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:36:00.388859    7086 out.go:298] Setting JSON to true
	I0408 10:36:00.405097    7086 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5730,"bootTime":1712592030,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:36:00.405175    7086 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:36:00.410622    7086 out.go:97] [download-only-702000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:36:00.414581    7086 out.go:169] MINIKUBE_LOCATION=18585
	I0408 10:36:00.410716    7086 notify.go:220] Checking for updates...
	I0408 10:36:00.422581    7086 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:36:00.425589    7086 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:36:00.428648    7086 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:36:00.431628    7086 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	W0408 10:36:00.437605    7086 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 10:36:00.437776    7086 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:36:00.440506    7086 out.go:97] Using the qemu2 driver based on user configuration
	I0408 10:36:00.440512    7086 start.go:297] selected driver: qemu2
	I0408 10:36:00.440515    7086 start.go:901] validating driver "qemu2" against <nil>
	I0408 10:36:00.440555    7086 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 10:36:00.443578    7086 out.go:169] Automatically selected the socket_vmnet network
	I0408 10:36:00.448787    7086 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0408 10:36:00.448877    7086 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 10:36:00.448919    7086 cni.go:84] Creating CNI manager for ""
	I0408 10:36:00.448931    7086 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:36:00.448936    7086 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 10:36:00.448986    7086 start.go:340] cluster config:
	{Name:download-only-702000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-702000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:36:00.453427    7086 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:36:00.456587    7086 out.go:97] Starting "download-only-702000" primary control-plane node in "download-only-702000" cluster
	I0408 10:36:00.456593    7086 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:36:00.513788    7086 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:36:00.513809    7086 cache.go:56] Caching tarball of preloaded images
	I0408 10:36:00.513955    7086 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:36:00.519095    7086 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0408 10:36:00.519103    7086 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0408 10:36:00.616563    7086 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4?checksum=md5:c0bb0715201da444334d968c298f45eb -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0408 10:36:08.307512    7086 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0408 10:36:08.307669    7086 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0408 10:36:08.864992    7086 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0408 10:36:08.865188    7086 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/download-only-702000/config.json ...
	I0408 10:36:08.865209    7086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18585-6624/.minikube/profiles/download-only-702000/config.json: {Name:mk92677d5ef5ecd53ecc9a9e6369be3c7d64faa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 10:36:08.865430    7086 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0408 10:36:08.865542    7086 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/darwin/arm64/v1.29.3/kubectl
	
	
	* The control-plane node download-only-702000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-702000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)
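The v1.29.3 run shows the preload cache path working end to end: minikube finds the remote tarball, downloads it with an md5 digest pinned in the URL query, then verifies and saves the checksum before treating the preload as usable. A short Go sketch of that verification step, assuming the default $HOME/.minikube cache layout rather than the CI runner's path; the digest is the one pinned in the download URL above:

package main

// Recompute the md5 of the cached preload tarball and compare it with the
// digest from the ?checksum=md5:... query, mirroring the "getting checksum"
// and "verifying checksum" steps in the log. Illustrative, not minikube code.
import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
	"path/filepath"
)

func main() {
	const want = "c0bb0715201da444334d968c298f45eb"
	home, err := os.UserHomeDir()
	if err != nil {
		log.Fatal(err)
	}
	path := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4")
	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	fmt.Println("checksum ok:", hex.EncodeToString(h.Sum(nil)) == want)
}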

TestDownloadOnly/v1.29.3/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.23s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-702000
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.30.0-rc.1/json-events (9.9s)

=== RUN   TestDownloadOnly/v1.30.0-rc.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-347000 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-347000 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.1 --container-runtime=docker --driver=qemu2 : (9.895137833s)
--- PASS: TestDownloadOnly/v1.30.0-rc.1/json-events (9.90s)

TestDownloadOnly/v1.30.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.1/kubectl
--- PASS: TestDownloadOnly/v1.30.0-rc.1/kubectl (0.00s)

TestDownloadOnly/v1.30.0-rc.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.30.0-rc.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-347000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-347000: exit status 85 (91.727208ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-557000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:35 PDT |                     |
	|         | -p download-only-557000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=qemu2                    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:35 PDT | 08 Apr 24 10:36 PDT |
	| delete  | -p download-only-557000           | download-only-557000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	| start   | -o=json --download-only           | download-only-702000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | -p download-only-702000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=qemu2                    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	| delete  | -p download-only-702000           | download-only-702000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT | 08 Apr 24 10:36 PDT |
	| start   | -o=json --download-only           | download-only-347000 | jenkins | v1.33.0-beta.0 | 08 Apr 24 10:36 PDT |                     |
	|         | -p download-only-347000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1 |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=qemu2                    |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 10:36:11
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 10:36:11.003834    7120 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:36:11.003965    7120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:36:11.003968    7120 out.go:304] Setting ErrFile to fd 2...
	I0408 10:36:11.003971    7120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:36:11.004091    7120 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:36:11.005249    7120 out.go:298] Setting JSON to true
	I0408 10:36:11.021382    7120 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5741,"bootTime":1712592030,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:36:11.021453    7120 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:36:11.026314    7120 out.go:97] [download-only-347000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:36:11.031232    7120 out.go:169] MINIKUBE_LOCATION=18585
	I0408 10:36:11.026397    7120 notify.go:220] Checking for updates...
	I0408 10:36:11.040212    7120 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:36:11.043280    7120 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:36:11.046248    7120 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:36:11.049293    7120 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	W0408 10:36:11.055127    7120 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 10:36:11.055325    7120 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:36:11.058170    7120 out.go:97] Using the qemu2 driver based on user configuration
	I0408 10:36:11.058178    7120 start.go:297] selected driver: qemu2
	I0408 10:36:11.058182    7120 start.go:901] validating driver "qemu2" against <nil>
	I0408 10:36:11.058235    7120 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 10:36:11.061185    7120 out.go:169] Automatically selected the socket_vmnet network
	I0408 10:36:11.064761    7120 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0408 10:36:11.064863    7120 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 10:36:11.064910    7120 cni.go:84] Creating CNI manager for ""
	I0408 10:36:11.064919    7120 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 10:36:11.064926    7120 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 10:36:11.064963    7120 start.go:340] cluster config:
	{Name:download-only-347000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.1 ClusterName:download-only-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:36:11.069190    7120 iso.go:125] acquiring lock: {Name:mk1a743390c76b4f859277885e86152612ebf514 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 10:36:11.072196    7120 out.go:97] Starting "download-only-347000" primary control-plane node in "download-only-347000" cluster
	I0408 10:36:11.072206    7120 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime docker
	I0408 10:36:11.165852    7120 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.1/preloaded-images-k8s-v18-v1.30.0-rc.1-docker-overlay2-arm64.tar.lz4
	I0408 10:36:11.165873    7120 cache.go:56] Caching tarball of preloaded images
	I0408 10:36:11.166083    7120 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.1 and runtime docker
	I0408 10:36:11.171225    7120 out.go:97] Downloading Kubernetes v1.30.0-rc.1 preload ...
	I0408 10:36:11.171236    7120 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.1-docker-overlay2-arm64.tar.lz4 ...
	I0408 10:36:11.274873    7120 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.1/preloaded-images-k8s-v18-v1.30.0-rc.1-docker-overlay2-arm64.tar.lz4?checksum=md5:e6c4e749e1d3aa9e638b1a53bc03af67 -> /Users/jenkins/minikube-integration/18585-6624/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-347000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-347000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.1/LogsDuration (0.09s)

TestDownloadOnly/v1.30.0-rc.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.0-rc.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.1/DeleteAll (0.23s)

TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-347000
--- PASS: TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.35s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-035000 --alsologtostderr --binary-mirror http://127.0.0.1:51060 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-035000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-035000
--- PASS: TestBinaryMirror (0.35s)
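TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:51060, which redirects Kubernetes binary downloads to a local HTTP server instead of dl.k8s.io. The test brings up its own server; a minimal stand-in is a plain file server, assuming the mirrored binaries are laid out the way dl.k8s.io release paths are (the ./mirror directory name is hypothetical):

package main

// Serve a local directory over HTTP on the port the test passes to
// --binary-mirror. Port matches the log above; the directory is illustrative.
import (
	"log"
	"net/http"
)

func main() {
	log.Fatal(http.ListenAndServe("127.0.0.1:51060", http.FileServer(http.Dir("./mirror"))))
}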

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-610000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-610000: exit status 85 (56.030833ms)

-- stdout --
	* Profile "addons-610000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-610000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
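This subtest and the Disabling one that follows share a pattern used throughout the report: invoke the minikube binary against a profile that was never created and assert on the exit code (85 here), treating the non-zero exit as the expected result. A sketch of the same check outside the test harness, with the binary path and profile name copied from the log:

package main

// Run an addons command against a nonexistent profile and report the exit
// code, the check addons_test.go performs above. Illustrative only.
import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "addons", "enable", "dashboard", "-p", "addons-610000")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit status:", exitErr.ExitCode()) // 85 in the run above
	}
}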

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-610000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-610000: exit status 85 (58.784875ms)

-- stdout --
	* Profile "addons-610000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-610000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (9.59s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.59s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 status: exit status 7 (32.386042ms)

-- stdout --
	nospam-898000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 status: exit status 7 (31.0325ms)

-- stdout --
	nospam-898000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 status: exit status 7 (31.215333ms)

-- stdout --
	nospam-898000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)

TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 pause: exit status 83 (49.530708ms)

-- stdout --
	* The control-plane node nospam-898000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-898000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 pause: exit status 83 (42.829625ms)

-- stdout --
	* The control-plane node nospam-898000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-898000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 pause: exit status 83 (40.637375ms)

-- stdout --
	* The control-plane node nospam-898000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-898000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 unpause: exit status 83 (40.915791ms)

-- stdout --
	* The control-plane node nospam-898000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-898000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 unpause: exit status 83 (40.900667ms)

-- stdout --
	* The control-plane node nospam-898000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-898000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 unpause: exit status 83 (47.674583ms)

-- stdout --
	* The control-plane node nospam-898000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-898000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (8.68s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 stop: (3.766949291s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 stop: (3.04301775s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 stop: (1.863040917s)
--- PASS: TestErrorSpam/stop (8.68s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18585-6624/.minikube/files/etc/test/nested/copy/7043/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.84s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.84s)

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-193000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3473876902/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 cache add minikube-local-cache-test:functional-193000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 cache delete minikube-local-cache-test:functional-193000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-193000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 config get cpus: exit status 14 (32.925208ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 config get cpus: exit status 14 (33.57875ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-193000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-193000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (156.596375ms)
-- stdout --
	* [functional-193000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0408 10:38:00.065384    7741 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:38:00.065561    7741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:38:00.065566    7741 out.go:304] Setting ErrFile to fd 2...
	I0408 10:38:00.065570    7741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:38:00.065746    7741 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:38:00.067179    7741 out.go:298] Setting JSON to false
	I0408 10:38:00.088652    7741 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5850,"bootTime":1712592030,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:38:00.088730    7741 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:38:00.094129    7741 out.go:177] * [functional-193000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0408 10:38:00.105077    7741 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:38:00.101013    7741 notify.go:220] Checking for updates...
	I0408 10:38:00.110995    7741 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:38:00.114127    7741 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:38:00.117056    7741 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:38:00.118289    7741 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:38:00.121042    7741 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:38:00.124395    7741 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:38:00.124693    7741 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:38:00.128937    7741 out.go:177] * Using the qemu2 driver based on existing profile
	I0408 10:38:00.136051    7741 start.go:297] selected driver: qemu2
	I0408 10:38:00.136060    7741 start.go:901] validating driver "qemu2" against &{Name:functional-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.29.3 ClusterName:functional-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:38:00.136118    7741 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:38:00.143033    7741 out.go:177] 
	W0408 10:38:00.147071    7741 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0408 10:38:00.151081    7741 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-193000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-193000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-193000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.852209ms)
-- stdout --
	* [functional-193000] minikube v1.33.0-beta.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0408 10:38:00.303039    7752 out.go:291] Setting OutFile to fd 1 ...
	I0408 10:38:00.303149    7752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:38:00.303152    7752 out.go:304] Setting ErrFile to fd 2...
	I0408 10:38:00.303155    7752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 10:38:00.303287    7752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18585-6624/.minikube/bin
	I0408 10:38:00.304769    7752 out.go:298] Setting JSON to false
	I0408 10:38:00.321535    7752 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5850,"bootTime":1712592030,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0408 10:38:00.321612    7752 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 10:38:00.327092    7752 out.go:177] * [functional-193000] minikube v1.33.0-beta.0 sur Darwin 14.3.1 (arm64)
	I0408 10:38:00.334100    7752 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 10:38:00.338073    7752 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	I0408 10:38:00.334154    7752 notify.go:220] Checking for updates...
	I0408 10:38:00.341009    7752 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0408 10:38:00.344054    7752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 10:38:00.347067    7752 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	I0408 10:38:00.350020    7752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 10:38:00.353344    7752 config.go:182] Loaded profile config "functional-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0408 10:38:00.353609    7752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 10:38:00.358067    7752 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0408 10:38:00.365015    7752 start.go:297] selected driver: qemu2
	I0408 10:38:00.365022    7752 start.go:901] validating driver "qemu2" against &{Name:functional-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.29.3 ClusterName:functional-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 10:38:00.365072    7752 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 10:38:00.371951    7752 out.go:177] 
	W0408 10:38:00.376049    7752 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0408 10:38:00.379112    7752 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.62s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.583252083s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-193000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-193000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image rm gcr.io/google-containers/addon-resizer:functional-193000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-193000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 image save --daemon gcr.io/google-containers/addon-resizer:functional-193000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-193000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "71.468584ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "35.859208ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "70.668541ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "35.535958ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.0123115s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-193000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-193000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-193000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-193000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.02s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-860000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-860000 --output=json --user=testUser: (2.017454292s)
--- PASS: TestJSONOutput/stop/Command (2.02s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.34s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-583000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-583000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (107.92ms)
-- stdout --
	{"specversion":"1.0","id":"a100910f-4e36-4406-aa1a-5c13e99f65f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-583000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f41d04f2-741a-486a-bdd3-c60bdf5c084d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18585"}}
	{"specversion":"1.0","id":"649d5329-40b2-4edb-9c00-0912fe84596c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig"}}
	{"specversion":"1.0","id":"515814a3-7dc9-475e-9dd2-c52c8fa1c3c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c626fbd5-5b12-43ab-a5f9-13f7c7574e20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"95f4a7ec-b49d-4c24-932c-31f23062d245","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube"}}
	{"specversion":"1.0","id":"0a4aa582-9f2a-4082-8e58-723b38999587","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c0850932-10f0-4cb0-a9b2-aaa11ee00b35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-583000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-583000
--- PASS: TestErrorJSONOutput (0.34s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (1.03s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.03s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-535000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-535000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (101.709ms)
-- stdout --
	* [NoKubernetes-535000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18585-6624/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18585-6624/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-535000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-535000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.696667ms)
-- stdout --
	* The control-plane node NoKubernetes-535000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-535000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.6s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.758282584s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.841042625s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.60s)

TestNoKubernetes/serial/Stop (3.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-535000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-535000: (3.230406167s)
--- PASS: TestNoKubernetes/serial/Stop (3.23s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-535000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-535000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.589917ms)
-- stdout --
	* The control-plane node NoKubernetes-535000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-535000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-476000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (2.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-522000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-522000 --alsologtostderr -v=3: (2.960933167s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.96s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-522000 -n old-k8s-version-522000: exit status 7 (55.110625ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-522000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (1.97s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-042000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-042000 --alsologtostderr -v=3: (1.968329333s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.97s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000: exit status 7 (53.7235ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-042000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (1.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-956000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-956000 --alsologtostderr -v=3: (1.898469666s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.90s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-956000 -n embed-certs-956000: exit status 7 (60.235333ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-956000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-664000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-664000 --alsologtostderr -v=3: (3.352882458s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (58.1935ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-664000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-953000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-953000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-953000 --alsologtostderr -v=3: (3.282639167s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-953000 -n newest-cni-953000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-953000 -n newest-cni-953000: exit status 7 (60.6115ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-953000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.1/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-193000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port786341399/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1712597846405474000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port786341399/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1712597846405474000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port786341399/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1712597846405474000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port786341399/001/test-1712597846405474000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (58.93625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.387334ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.844875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.132083ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.955625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (93.263417ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.066084ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "sudo umount -f /mount-9p": exit status 83 (46.741583ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-193000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-193000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port786341399/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.62s)
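
Note: all three MountCmd subtests skip for the same reason, and the log shows two stacked problems. First, every ssh probe exits 83 because the functional-193000 host is already stopped (state=Stopped in every stdout block above). Second, even on a running host, the unsigned test binary listening on a non-localhost port trips an interactive macOS firewall prompt that CI cannot answer, so the 9p mount never appears. The probe loop the test runs amounts to the following (rough sketch, not the test's exact logic; /tmp/src is a placeholder source directory):

$ out/minikube-darwin-arm64 mount -p functional-193000 /tmp/src:/mount-9p --alsologtostderr -v=1 &
$ for i in 1 2 3 4 5 6 7; do
    out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p" && break
    sleep 1
  done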

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (10.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-193000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4078771234/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (64.020583ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.329875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.384541ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.456209ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.548125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.404333ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.940958ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "sudo umount -f /mount-9p": exit status 83 (46.772875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-193000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-193000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4078771234/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (10.22s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (11.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-193000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup921266306/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-193000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup921266306/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-193000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup921266306/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T" /mount1: exit status 83 (88.008708ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T" /mount1: exit status 83 (82.768125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T" /mount1: exit status 83 (85.928375ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T" /mount1: exit status 83 (89.015375ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T" /mount1: exit status 83 (88.576334ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T" /mount1: exit status 83 (86.827708ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-193000 ssh "findmnt -T" /mount1: exit status 83 (86.9125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-193000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-193000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup921266306/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-193000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup921266306/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-193000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup921266306/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (11.76s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (2.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-363000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-363000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-363000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-363000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-363000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-363000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-363000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-363000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-363000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-363000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-363000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-363000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-363000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-363000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-363000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-363000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-363000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-363000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-363000"

                                                
                                                
----------------------- debugLogs end: cilium-363000 [took: 2.277825417s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-363000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-363000
--- SKIP: TestNetworkPlugins/group/cilium (2.51s)
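
Note: the debugLogs sweep above runs even though the test skipped before any cilium-363000 cluster was created, so every probe fails in one of two ways: kubectl-based probes report a missing context, and minikube-based probes report a missing profile. Both conditions are confirmable directly (profile and context names taken from the log):

$ kubectl config get-contexts cilium-363000
$ out/minikube-darwin-arm64 profile list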

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-089000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-089000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                    