Test Report: QEMU_macOS 18669

cfcc925aedaed70a8d6bc80f04f086c17ea387e6:2024-04-19:34110

Failed tests: 156 of 258

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 16.82
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.12
27 TestAddons/Setup 10.31
28 TestCertOptions 10.06
29 TestCertExpiration 195.03
30 TestDockerFlags 10
31 TestForceSystemdFlag 11.14
32 TestForceSystemdEnv 10.37
38 TestErrorSpam/setup 9.79
47 TestFunctional/serial/StartWithProxy 9.86
49 TestFunctional/serial/SoftStart 5.26
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 0.64
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.95
63 TestFunctional/serial/ExtraConfig 5.26
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.09
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.13
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.13
82 TestFunctional/parallel/CpCmd 0.29
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.3
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.05
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
110 TestFunctional/parallel/ServiceCmd/Format 0.04
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 107.59
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.33
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.35
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.53
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 32.85
141 TestMultiControlPlane/serial/StartCluster 10.06
142 TestMultiControlPlane/serial/DeployApp 119.42
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.1
150 TestMultiControlPlane/serial/RestartSecondaryNode 49.91
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.29
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
155 TestMultiControlPlane/serial/StopCluster 3.24
156 TestMultiControlPlane/serial/RestartCluster 5.26
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.11
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.1
162 TestImageBuild/serial/Setup 9.87
165 TestJSONOutput/start/Command 9.84
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.23
197 TestMountStart/serial/StartWithMountFirst 9.88
200 TestMultiNode/serial/FreshStart2Nodes 10
201 TestMultiNode/serial/DeployApp2Nodes 93.95
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.1
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.15
208 TestMultiNode/serial/StartAfterStop 50.81
209 TestMultiNode/serial/RestartKeepsNodes 9.1
210 TestMultiNode/serial/DeleteNode 0.11
211 TestMultiNode/serial/StopMultiNode 3.25
212 TestMultiNode/serial/RestartMultiNode 5.26
213 TestMultiNode/serial/ValidateNameConflict 20.17
217 TestPreload 10.1
219 TestScheduledStopUnix 10.04
220 TestSkaffold 11.98
223 TestRunningBinaryUpgrade 583.76
225 TestKubernetesUpgrade 18.79
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.19
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.09
241 TestStoppedBinaryUpgrade/Upgrade 574.45
243 TestPause/serial/Start 9.87
253 TestNoKubernetes/serial/StartWithK8s 9.8
254 TestNoKubernetes/serial/StartWithStopK8s 5.35
255 TestNoKubernetes/serial/Start 5.33
259 TestNoKubernetes/serial/StartNoArgs 5.34
261 TestNetworkPlugins/group/auto/Start 9.76
262 TestNetworkPlugins/group/kindnet/Start 9.97
263 TestNetworkPlugins/group/calico/Start 9.77
264 TestNetworkPlugins/group/custom-flannel/Start 9.78
265 TestNetworkPlugins/group/false/Start 9.76
266 TestNetworkPlugins/group/enable-default-cni/Start 9.74
267 TestNetworkPlugins/group/flannel/Start 9.79
268 TestNetworkPlugins/group/bridge/Start 9.79
269 TestNetworkPlugins/group/kubenet/Start 9.77
271 TestStartStop/group/old-k8s-version/serial/FirstStart 9.75
272 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
273 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.11
283 TestStartStop/group/no-preload/serial/FirstStart 10.02
284 TestStartStop/group/no-preload/serial/DeployApp 0.09
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
288 TestStartStop/group/no-preload/serial/SecondStart 5.21
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
292 TestStartStop/group/no-preload/serial/Pause 0.11
294 TestStartStop/group/embed-certs/serial/FirstStart 11.44
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.78
297 TestStartStop/group/embed-certs/serial/DeployApp 0.09
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
301 TestStartStop/group/embed-certs/serial/SecondStart 5.27
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
305 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
306 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
307 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
308 TestStartStop/group/embed-certs/serial/Pause 0.11
311 TestStartStop/group/newest-cni/serial/FirstStart 10.06
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.32
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/SecondStart 5.26
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/newest-cni/serial/Pause 0.11
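
Each failure below is reproducible in isolation. A typical way to re-run a single failing test from the minikube repo root against the freshly built out/minikube-darwin-arm64 (a sketch; the harness flags this Jenkins job passes are not shown in the report, and the -timeout value here is an assumption):

	# Hypothetical local re-run of one integration test:
	go test ./test/integration -run "TestOffline" -v -timeout 30m
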
TestDownloadOnly/v1.20.0/json-events (16.82s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-668000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-668000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (16.822316458s)

-- stdout --
	{"specversion":"1.0","id":"ced4ab03-268d-4747-9da4-080b4d117f20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-668000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5949b339-ad4e-4027-8725-587f4b4507e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18669"}}
	{"specversion":"1.0","id":"80028ccf-8583-4c9c-bca1-059d7d5c1964","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig"}}
	{"specversion":"1.0","id":"819c41d3-3123-4ddc-98c0-8771a30a1a0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e05e713a-0ea6-414b-ae6d-a61766f02411","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fd440d9c-e55e-42d9-9af5-95e63920e2db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube"}}
	{"specversion":"1.0","id":"6d8ff4b7-6246-4321-8b5c-a11a1f63c12b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"f5785214-2422-4c7e-bcd3-ee1d67e8b702","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9726f2cd-4c3f-4b8f-9c90-520b77bb53cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"888ae61e-b2d8-4bc6-a172-0cf8170e5f9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b63ef403-19d4-4fbe-bae5-422c8342523e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-668000\" primary control-plane node in \"download-only-668000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0ee99cda-53d2-46ac-ac90-375fe1089e95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"972bba4a-89b0-46cf-b79a-5d90402330f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108a24e00 0x108a24e00 0x108a24e00 0x108a24e00 0x108a24e00 0x108a24e00 0x108a24e00] Decompressors:map[bz2:0x140007e3b50 gz:0x140007e3b58 tar:0x140007e3b00 tar.bz2:0x140007e3b10 tar.gz:0x140007e3b20 tar.xz:0x140007e3b30 tar.zst:0x140007e3b40 tbz2:0x140007e3b10 tgz:0x14
0007e3b20 txz:0x140007e3b30 tzst:0x140007e3b40 xz:0x140007e3b60 zip:0x140007e3b70 zst:0x140007e3b68] Getters:map[file:0x14002460580 http:0x140006b0370 https:0x140006b04b0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"923ad9c6-4487-47ca-854e-c2ec2ab2363d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0419 12:23:10.542052    7306 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:23:10.542208    7306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:23:10.542212    7306 out.go:304] Setting ErrFile to fd 2...
	I0419 12:23:10.542214    7306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:23:10.542337    7306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	W0419 12:23:10.542425    7306 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18669-6895/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18669-6895/.minikube/config/config.json: no such file or directory
	I0419 12:23:10.543658    7306 out.go:298] Setting JSON to true
	I0419 12:23:10.561782    7306 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4961,"bootTime":1713549629,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:23:10.561854    7306 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:23:10.565901    7306 out.go:97] [download-only-668000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:23:10.569930    7306 out.go:169] MINIKUBE_LOCATION=18669
	I0419 12:23:10.566054    7306 notify.go:220] Checking for updates...
	W0419 12:23:10.566086    7306 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball: no such file or directory
	I0419 12:23:10.576813    7306 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:23:10.581060    7306 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:23:10.583945    7306 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:23:10.586976    7306 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	W0419 12:23:10.594364    7306 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0419 12:23:10.594550    7306 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:23:10.597770    7306 out.go:97] Using the qemu2 driver based on user configuration
	I0419 12:23:10.597789    7306 start.go:297] selected driver: qemu2
	I0419 12:23:10.597804    7306 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:23:10.597916    7306 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:23:10.602003    7306 out.go:169] Automatically selected the socket_vmnet network
	I0419 12:23:10.607887    7306 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0419 12:23:10.607978    7306 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0419 12:23:10.608042    7306 cni.go:84] Creating CNI manager for ""
	I0419 12:23:10.608058    7306 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0419 12:23:10.608113    7306 start.go:340] cluster config:
	{Name:download-only-668000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-668000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:23:10.613782    7306 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:23:10.618118    7306 out.go:97] Downloading VM boot image ...
	I0419 12:23:10.618146    7306 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso
	I0419 12:23:18.520189    7306 out.go:97] Starting "download-only-668000" primary control-plane node in "download-only-668000" cluster
	I0419 12:23:18.520221    7306 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0419 12:23:18.578847    7306 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0419 12:23:18.578855    7306 cache.go:56] Caching tarball of preloaded images
	I0419 12:23:18.579636    7306 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0419 12:23:18.584978    7306 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0419 12:23:18.584984    7306 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0419 12:23:18.665849    7306 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0419 12:23:25.900977    7306 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0419 12:23:25.901148    7306 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0419 12:23:26.598112    7306 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0419 12:23:26.598320    7306 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/download-only-668000/config.json ...
	I0419 12:23:26.598336    7306 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/download-only-668000/config.json: {Name:mkc40379667fdfa62985ca9f1f652f71efaabcdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:23:26.598566    7306 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0419 12:23:26.598749    7306 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0419 12:23:27.283349    7306 out.go:169] 
	W0419 12:23:27.288352    7306 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108a24e00 0x108a24e00 0x108a24e00 0x108a24e00 0x108a24e00 0x108a24e00 0x108a24e00] Decompressors:map[bz2:0x140007e3b50 gz:0x140007e3b58 tar:0x140007e3b00 tar.bz2:0x140007e3b10 tar.gz:0x140007e3b20 tar.xz:0x140007e3b30 tar.zst:0x140007e3b40 tbz2:0x140007e3b10 tgz:0x140007e3b20 txz:0x140007e3b30 tzst:0x140007e3b40 xz:0x140007e3b60 zip:0x140007e3b70 zst:0x140007e3b68] Getters:map[file:0x14002460580 http:0x140006b0370 https:0x140006b04b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0419 12:23:27.288393    7306 out_reason.go:110] 
	W0419 12:23:27.296251    7306 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:23:27.300237    7306 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-668000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (16.82s)
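
Root cause: the kubectl checksum download returns 404 because upstream does not publish darwin/arm64 kubectl binaries for v1.20.0 (arm64 macOS binaries only appeared in later Kubernetes releases, as far as I know), so this test can only pass once the tested version has an arm64 build. The URL from the log can be probed directly (assuming curl; expected to print 404, matching the log):

	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256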

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
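
This subtest fails as a direct consequence of the previous one: kubectl was never downloaded, so nothing exists at the cache path the test stats. A quick check on the agent (path taken verbatim from the log above):

	ls -l /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/darwin/arm64/v1.20.0/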

TestOffline (10.12s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-257000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-257000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.968210167s)

-- stdout --
	* [offline-docker-257000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-257000" primary control-plane node in "offline-docker-257000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-257000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:35:24.595062    8837 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:35:24.595233    8837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:35:24.595236    8837 out.go:304] Setting ErrFile to fd 2...
	I0419 12:35:24.595239    8837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:35:24.595380    8837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:35:24.596540    8837 out.go:298] Setting JSON to false
	I0419 12:35:24.613950    8837 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5695,"bootTime":1713549629,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:35:24.614029    8837 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:35:24.617911    8837 out.go:177] * [offline-docker-257000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:35:24.624935    8837 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:35:24.625026    8837 notify.go:220] Checking for updates...
	I0419 12:35:24.631878    8837 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:35:24.634876    8837 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:35:24.637926    8837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:35:24.640880    8837 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:35:24.643781    8837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:35:24.647217    8837 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:35:24.647282    8837 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:35:24.650854    8837 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:35:24.657881    8837 start.go:297] selected driver: qemu2
	I0419 12:35:24.657891    8837 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:35:24.657899    8837 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:35:24.659903    8837 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:35:24.662859    8837 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:35:24.664171    8837 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:35:24.664211    8837 cni.go:84] Creating CNI manager for ""
	I0419 12:35:24.664218    8837 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:35:24.664221    8837 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 12:35:24.664255    8837 start.go:340] cluster config:
	{Name:offline-docker-257000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-257000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:35:24.668874    8837 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:35:24.675876    8837 out.go:177] * Starting "offline-docker-257000" primary control-plane node in "offline-docker-257000" cluster
	I0419 12:35:24.679822    8837 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:35:24.679858    8837 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:35:24.679868    8837 cache.go:56] Caching tarball of preloaded images
	I0419 12:35:24.679944    8837 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:35:24.679950    8837 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:35:24.680013    8837 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/offline-docker-257000/config.json ...
	I0419 12:35:24.680027    8837 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/offline-docker-257000/config.json: {Name:mk5b76a02b6ed7e6b98cd8131bc326b3757f0e0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:35:24.680337    8837 start.go:360] acquireMachinesLock for offline-docker-257000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:35:24.680372    8837 start.go:364] duration metric: took 24.125µs to acquireMachinesLock for "offline-docker-257000"
	I0419 12:35:24.680383    8837 start.go:93] Provisioning new machine with config: &{Name:offline-docker-257000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-257000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:35:24.680422    8837 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:35:24.684890    8837 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0419 12:35:24.700975    8837 start.go:159] libmachine.API.Create for "offline-docker-257000" (driver="qemu2")
	I0419 12:35:24.701006    8837 client.go:168] LocalClient.Create starting
	I0419 12:35:24.701075    8837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:35:24.701106    8837 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:24.701117    8837 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:24.701166    8837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:35:24.701188    8837 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:24.701199    8837 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:24.701574    8837 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:35:24.829071    8837 main.go:141] libmachine: Creating SSH key...
	I0419 12:35:25.105304    8837 main.go:141] libmachine: Creating Disk image...
	I0419 12:35:25.105315    8837 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:35:25.105490    8837 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/disk.qcow2
	I0419 12:35:25.118946    8837 main.go:141] libmachine: STDOUT: 
	I0419 12:35:25.118969    8837 main.go:141] libmachine: STDERR: 
	I0419 12:35:25.119033    8837 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/disk.qcow2 +20000M
	I0419 12:35:25.131492    8837 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:35:25.131514    8837 main.go:141] libmachine: STDERR: 
	I0419 12:35:25.131535    8837 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/disk.qcow2
	I0419 12:35:25.131541    8837 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:35:25.131571    8837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:d8:02:08:4a:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/disk.qcow2
	I0419 12:35:25.133614    8837 main.go:141] libmachine: STDOUT: 
	I0419 12:35:25.133635    8837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:35:25.133658    8837 client.go:171] duration metric: took 432.653ms to LocalClient.Create
	I0419 12:35:27.135688    8837 start.go:128] duration metric: took 2.4553125s to createHost
	I0419 12:35:27.135711    8837 start.go:83] releasing machines lock for "offline-docker-257000", held for 2.455390417s
	W0419 12:35:27.135728    8837 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:35:27.140353    8837 out.go:177] * Deleting "offline-docker-257000" in qemu2 ...
	W0419 12:35:27.150187    8837 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:35:27.150205    8837 start.go:728] Will try again in 5 seconds ...
	I0419 12:35:32.152343    8837 start.go:360] acquireMachinesLock for offline-docker-257000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:35:32.152833    8837 start.go:364] duration metric: took 409.458µs to acquireMachinesLock for "offline-docker-257000"
	I0419 12:35:32.152987    8837 start.go:93] Provisioning new machine with config: &{Name:offline-docker-257000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-257000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:35:32.153234    8837 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:35:32.162912    8837 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0419 12:35:32.212758    8837 start.go:159] libmachine.API.Create for "offline-docker-257000" (driver="qemu2")
	I0419 12:35:32.212806    8837 client.go:168] LocalClient.Create starting
	I0419 12:35:32.212915    8837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:35:32.212982    8837 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:32.213001    8837 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:32.213094    8837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:35:32.213140    8837 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:32.213156    8837 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:32.213673    8837 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:35:32.379107    8837 main.go:141] libmachine: Creating SSH key...
	I0419 12:35:32.467367    8837 main.go:141] libmachine: Creating Disk image...
	I0419 12:35:32.467373    8837 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:35:32.467566    8837 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/disk.qcow2
	I0419 12:35:32.480126    8837 main.go:141] libmachine: STDOUT: 
	I0419 12:35:32.480151    8837 main.go:141] libmachine: STDERR: 
	I0419 12:35:32.480209    8837 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/disk.qcow2 +20000M
	I0419 12:35:32.491063    8837 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:35:32.491079    8837 main.go:141] libmachine: STDERR: 
	I0419 12:35:32.491088    8837 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/disk.qcow2
	I0419 12:35:32.491092    8837 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:35:32.491121    8837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:eb:05:e0:91:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/offline-docker-257000/disk.qcow2
	I0419 12:35:32.492711    8837 main.go:141] libmachine: STDOUT: 
	I0419 12:35:32.492734    8837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:35:32.492746    8837 client.go:171] duration metric: took 279.941375ms to LocalClient.Create
	I0419 12:35:34.494839    8837 start.go:128] duration metric: took 2.341631625s to createHost
	I0419 12:35:34.494891    8837 start.go:83] releasing machines lock for "offline-docker-257000", held for 2.342083875s
	W0419 12:35:34.495153    8837 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-257000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-257000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:35:34.503543    8837 out.go:177] 
	W0419 12:35:34.507468    8837 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:35:34.507529    8837 out.go:239] * 
	* 
	W0419 12:35:34.510110    8837 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:35:34.519510    8837 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-257000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-04-19 12:35:34.532099 -0700 PDT m=+744.093881667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-257000 -n offline-docker-257000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-257000 -n offline-docker-257000: exit status 7 (47.4455ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-257000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-257000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-257000
--- FAIL: TestOffline (10.12s)
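
The repeated ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused means no socket_vmnet daemon was serving that socket on the agent, so QEMU could not get vmnet networking; the same signature most likely explains the other short (~10s) Start failures in the table above. A quick host-side health check (a sketch; whether socket_vmnet runs under launchd, and its label, depend on how it was installed):

	# Does the socket exist, and is any process serving it?
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet
	# If socket_vmnet is managed by launchd, confirm it is loaded:
	sudo launchctl list | grep -i vmnet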

TestAddons/Setup (10.31s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-040000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-040000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.304458917s)

-- stdout --
	* [addons-040000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-040000" primary control-plane node in "addons-040000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-040000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:23:56.744061    7420 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:23:56.744210    7420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:23:56.744213    7420 out.go:304] Setting ErrFile to fd 2...
	I0419 12:23:56.744215    7420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:23:56.744334    7420 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:23:56.745372    7420 out.go:298] Setting JSON to false
	I0419 12:23:56.761402    7420 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5007,"bootTime":1713549629,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:23:56.761470    7420 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:23:56.765712    7420 out.go:177] * [addons-040000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:23:56.771671    7420 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:23:56.775672    7420 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:23:56.771719    7420 notify.go:220] Checking for updates...
	I0419 12:23:56.781573    7420 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:23:56.784686    7420 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:23:56.787687    7420 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:23:56.789001    7420 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:23:56.791794    7420 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:23:56.795715    7420 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:23:56.801658    7420 start.go:297] selected driver: qemu2
	I0419 12:23:56.801664    7420 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:23:56.801670    7420 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:23:56.803848    7420 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:23:56.806652    7420 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:23:56.809851    7420 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:23:56.809893    7420 cni.go:84] Creating CNI manager for ""
	I0419 12:23:56.809901    7420 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:23:56.809907    7420 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 12:23:56.809951    7420 start.go:340] cluster config:
	{Name:addons-040000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-040000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:23:56.814557    7420 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:23:56.822698    7420 out.go:177] * Starting "addons-040000" primary control-plane node in "addons-040000" cluster
	I0419 12:23:56.826531    7420 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:23:56.826544    7420 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:23:56.826553    7420 cache.go:56] Caching tarball of preloaded images
	I0419 12:23:56.826607    7420 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:23:56.826612    7420 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:23:56.826841    7420 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/addons-040000/config.json ...
	I0419 12:23:56.826853    7420 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/addons-040000/config.json: {Name:mkb9b1b121bb64bdfc16a598665f24e76f6940e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:23:56.827241    7420 start.go:360] acquireMachinesLock for addons-040000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:23:56.827303    7420 start.go:364] duration metric: took 55.542µs to acquireMachinesLock for "addons-040000"
	I0419 12:23:56.827313    7420 start.go:93] Provisioning new machine with config: &{Name:addons-040000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-040000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:23:56.827344    7420 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:23:56.833653    7420 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0419 12:23:56.852761    7420 start.go:159] libmachine.API.Create for "addons-040000" (driver="qemu2")
	I0419 12:23:56.852803    7420 client.go:168] LocalClient.Create starting
	I0419 12:23:56.852934    7420 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:23:56.903389    7420 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:23:57.107536    7420 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:23:57.253670    7420 main.go:141] libmachine: Creating SSH key...
	I0419 12:23:57.413551    7420 main.go:141] libmachine: Creating Disk image...
	I0419 12:23:57.413558    7420 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:23:57.413759    7420 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/disk.qcow2
	I0419 12:23:57.426676    7420 main.go:141] libmachine: STDOUT: 
	I0419 12:23:57.426699    7420 main.go:141] libmachine: STDERR: 
	I0419 12:23:57.426761    7420 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/disk.qcow2 +20000M
	I0419 12:23:57.437671    7420 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:23:57.437694    7420 main.go:141] libmachine: STDERR: 
	I0419 12:23:57.437710    7420 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/disk.qcow2
	I0419 12:23:57.437716    7420 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:23:57.437755    7420 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:7e:8f:32:b0:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/disk.qcow2
	I0419 12:23:57.439477    7420 main.go:141] libmachine: STDOUT: 
	I0419 12:23:57.439498    7420 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:23:57.439515    7420 client.go:171] duration metric: took 586.719792ms to LocalClient.Create
	I0419 12:23:59.441644    7420 start.go:128] duration metric: took 2.614336417s to createHost
	I0419 12:23:59.441702    7420 start.go:83] releasing machines lock for "addons-040000", held for 2.614448667s
	W0419 12:23:59.441810    7420 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:23:59.453111    7420 out.go:177] * Deleting "addons-040000" in qemu2 ...
	W0419 12:23:59.477002    7420 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:23:59.477037    7420 start.go:728] Will try again in 5 seconds ...
	I0419 12:24:04.479132    7420 start.go:360] acquireMachinesLock for addons-040000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:24:04.479599    7420 start.go:364] duration metric: took 374.417µs to acquireMachinesLock for "addons-040000"
	I0419 12:24:04.479787    7420 start.go:93] Provisioning new machine with config: &{Name:addons-040000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-040000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:24:04.480128    7420 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:24:04.489782    7420 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0419 12:24:04.539480    7420 start.go:159] libmachine.API.Create for "addons-040000" (driver="qemu2")
	I0419 12:24:04.539524    7420 client.go:168] LocalClient.Create starting
	I0419 12:24:04.539629    7420 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:24:04.539692    7420 main.go:141] libmachine: Decoding PEM data...
	I0419 12:24:04.539717    7420 main.go:141] libmachine: Parsing certificate...
	I0419 12:24:04.539805    7420 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:24:04.539857    7420 main.go:141] libmachine: Decoding PEM data...
	I0419 12:24:04.539867    7420 main.go:141] libmachine: Parsing certificate...
	I0419 12:24:04.540375    7420 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:24:04.673628    7420 main.go:141] libmachine: Creating SSH key...
	I0419 12:24:04.946832    7420 main.go:141] libmachine: Creating Disk image...
	I0419 12:24:04.946841    7420 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:24:04.947091    7420 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/disk.qcow2
	I0419 12:24:04.960650    7420 main.go:141] libmachine: STDOUT: 
	I0419 12:24:04.960680    7420 main.go:141] libmachine: STDERR: 
	I0419 12:24:04.960738    7420 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/disk.qcow2 +20000M
	I0419 12:24:04.972015    7420 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:24:04.972034    7420 main.go:141] libmachine: STDERR: 
	I0419 12:24:04.972057    7420 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/disk.qcow2
	I0419 12:24:04.972063    7420 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:24:04.972098    7420 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:46:2a:d8:8c:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/addons-040000/disk.qcow2
	I0419 12:24:04.973815    7420 main.go:141] libmachine: STDOUT: 
	I0419 12:24:04.973831    7420 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:24:04.973843    7420 client.go:171] duration metric: took 434.322875ms to LocalClient.Create
	I0419 12:24:06.976061    7420 start.go:128] duration metric: took 2.495946167s to createHost
	I0419 12:24:06.976131    7420 start.go:83] releasing machines lock for "addons-040000", held for 2.496517292s
	W0419 12:24:06.976534    7420 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-040000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-040000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:24:06.984977    7420 out.go:177] 
	W0419 12:24:06.992078    7420 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:24:06.992119    7420 out.go:239] * 
	* 
	W0419 12:24:06.994912    7420 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:24:07.002965    7420 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-040000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.31s)
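Every start in this section dies at the same point: `socket_vmnet_client` cannot reach the socket_vmnet daemon, so qemu never receives the network file descriptor it expects (the `-netdev socket,id=net0,fd=3` argument in the command above). "Connection refused" on a unix socket means the socket path exists but nothing is accepting on it, i.e. the daemon is down on this Jenkins host, which is why the retry five seconds later (start.go:728) fails identically. A pre-flight check along these lines would distinguish that from a qemu problem (a hypothetical helper, not part of minikube):

-- go sketch --
// Hypothetical pre-flight probe for the socket_vmnet daemon.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused": the socket file exists but no daemon is
		// listening; "no such file or directory": it was never created.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}
-- /go sketch --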

TestCertOptions (10.06s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-712000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-712000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.763082292s)

-- stdout --
	* [cert-options-712000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-712000" primary control-plane node in "cert-options-712000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-712000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-712000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-712000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-712000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-712000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (85.32ms)

-- stdout --
	* The control-plane node cert-options-712000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-712000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-712000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-712000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-712000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-712000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.233833ms)

-- stdout --
	* The control-plane node cert-options-712000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-712000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-712000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port.
-- stdout --
	* The control-plane node cert-options-712000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-712000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-04-19 12:36:04.976285 -0700 PDT m=+774.538749876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-712000 -n cert-options-712000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-712000 -n cert-options-712000: exit status 7 (32.477ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-712000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-712000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-712000
--- FAIL: TestCertOptions (10.06s)
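With no running VM, the SSH into the node exits 83 and every SAN assertion at cert_options_test.go:69 fails without the test ever seeing a certificate. For reference, the check being attempted amounts to the following, sketched with Go's crypto/x509 instead of the openssl invocation above (file path and structure are illustrative, not the test's actual code):

-- go sketch --
// Illustrative SAN check for an apiserver certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt") // illustrative path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Values passed via --apiserver-ips / --apiserver-names must show up
	// in the certificate's subject alternative names.
	for _, want := range []string{"127.0.0.1", "192.168.15.15"} {
		found := false
		for _, ip := range cert.IPAddresses {
			found = found || ip.Equal(net.ParseIP(want))
		}
		fmt.Printf("IP SAN %-15s present: %v\n", want, found)
	}
	for _, want := range []string{"localhost", "www.google.com"} {
		found := false
		for _, name := range cert.DNSNames {
			found = found || name == want
		}
		fmt.Printf("DNS SAN %-15s present: %v\n", want, found)
	}
}
-- /go sketch --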

TestCertExpiration (195.03s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-455000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-455000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.683024166s)

-- stdout --
	* [cert-expiration-455000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-455000" primary control-plane node in "cert-expiration-455000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-455000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-455000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-455000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-455000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-455000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.190284208s)

-- stdout --
	* [cert-expiration-455000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-455000" primary control-plane node in "cert-expiration-455000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-455000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-455000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-455000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-455000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-455000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-455000" primary control-plane node in "cert-expiration-455000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-455000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-455000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-455000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-04-19 12:39:04.969164 -0700 PDT m=+954.535663709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-455000 -n cert-expiration-455000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-455000 -n cert-expiration-455000: exit status 7 (58.000792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-455000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-455000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-455000
--- FAIL: TestCertExpiration (195.03s)
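The 195 s runtime is almost entirely the 3-minute wait between the two starts: the test launches with `--cert-expiration=3m`, waits out that window, then restarts with `--cert-expiration=8760h` expecting a warning about expired certificates. Since neither start got past VM creation, there were never any certificates to expire, and the second start fails on the same socket_vmnet error instead of producing the expected warning. The underlying expiry comparison is just a NotAfter check, sketched here with an illustrative path (not minikube's code):

-- go sketch --
// Illustrative certificate-expiry check.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt") // illustrative path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if remaining := time.Until(cert.NotAfter); remaining <= 0 {
		fmt.Println("certificate expired; a restart should warn and regenerate it")
	} else {
		fmt.Printf("certificate valid for another %s\n", remaining.Round(time.Second))
	}
}
-- /go sketch --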

TestDockerFlags (10s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-060000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-060000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.742262417s)

-- stdout --
	* [docker-flags-060000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-060000" primary control-plane node in "docker-flags-060000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-060000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:35:45.079869    9038 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:35:45.080009    9038 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:35:45.080013    9038 out.go:304] Setting ErrFile to fd 2...
	I0419 12:35:45.080016    9038 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:35:45.080130    9038 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:35:45.081163    9038 out.go:298] Setting JSON to false
	I0419 12:35:45.097203    9038 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5716,"bootTime":1713549629,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:35:45.097275    9038 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:35:45.102766    9038 out.go:177] * [docker-flags-060000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:35:45.109855    9038 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:35:45.109908    9038 notify.go:220] Checking for updates...
	I0419 12:35:45.113785    9038 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:35:45.116900    9038 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:35:45.119830    9038 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:35:45.122848    9038 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:35:45.125738    9038 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:35:45.129187    9038 config.go:182] Loaded profile config "force-systemd-flag-767000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:35:45.129253    9038 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:35:45.129302    9038 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:35:45.133771    9038 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:35:45.140819    9038 start.go:297] selected driver: qemu2
	I0419 12:35:45.140826    9038 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:35:45.140834    9038 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:35:45.143057    9038 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:35:45.145703    9038 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:35:45.148859    9038 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0419 12:35:45.148902    9038 cni.go:84] Creating CNI manager for ""
	I0419 12:35:45.148910    9038 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:35:45.148916    9038 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 12:35:45.148945    9038 start.go:340] cluster config:
	{Name:docker-flags-060000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:35:45.153526    9038 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:35:45.158791    9038 out.go:177] * Starting "docker-flags-060000" primary control-plane node in "docker-flags-060000" cluster
	I0419 12:35:45.162840    9038 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:35:45.162856    9038 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:35:45.162866    9038 cache.go:56] Caching tarball of preloaded images
	I0419 12:35:45.162937    9038 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:35:45.162942    9038 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:35:45.163013    9038 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/docker-flags-060000/config.json ...
	I0419 12:35:45.163023    9038 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/docker-flags-060000/config.json: {Name:mke54fe30b2b3db93e5b75ba05b6e0209b65ec10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:35:45.163430    9038 start.go:360] acquireMachinesLock for docker-flags-060000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:35:45.163467    9038 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "docker-flags-060000"
	I0419 12:35:45.163478    9038 start.go:93] Provisioning new machine with config: &{Name:docker-flags-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:35:45.163509    9038 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:35:45.172845    9038 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0419 12:35:45.190470    9038 start.go:159] libmachine.API.Create for "docker-flags-060000" (driver="qemu2")
	I0419 12:35:45.190499    9038 client.go:168] LocalClient.Create starting
	I0419 12:35:45.190563    9038 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:35:45.190591    9038 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:45.190599    9038 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:45.190642    9038 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:35:45.190665    9038 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:45.190671    9038 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:45.191164    9038 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:35:45.314563    9038 main.go:141] libmachine: Creating SSH key...
	I0419 12:35:45.384229    9038 main.go:141] libmachine: Creating Disk image...
	I0419 12:35:45.384234    9038 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:35:45.384402    9038 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/disk.qcow2
	I0419 12:35:45.396852    9038 main.go:141] libmachine: STDOUT: 
	I0419 12:35:45.396883    9038 main.go:141] libmachine: STDERR: 
	I0419 12:35:45.396931    9038 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/disk.qcow2 +20000M
	I0419 12:35:45.407710    9038 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:35:45.407732    9038 main.go:141] libmachine: STDERR: 
	I0419 12:35:45.407750    9038 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/disk.qcow2
	I0419 12:35:45.407754    9038 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:35:45.407794    9038 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:99:fe:bd:17:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/disk.qcow2
	I0419 12:35:45.409545    9038 main.go:141] libmachine: STDOUT: 
	I0419 12:35:45.409561    9038 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:35:45.409581    9038 client.go:171] duration metric: took 219.083083ms to LocalClient.Create
	I0419 12:35:47.411763    9038 start.go:128] duration metric: took 2.24828025s to createHost
	I0419 12:35:47.411824    9038 start.go:83] releasing machines lock for "docker-flags-060000", held for 2.248394792s
	W0419 12:35:47.411927    9038 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:35:47.429887    9038 out.go:177] * Deleting "docker-flags-060000" in qemu2 ...
	W0419 12:35:47.447425    9038 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:35:47.447450    9038 start.go:728] Will try again in 5 seconds ...
	I0419 12:35:52.449548    9038 start.go:360] acquireMachinesLock for docker-flags-060000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:35:52.449897    9038 start.go:364] duration metric: took 259.084µs to acquireMachinesLock for "docker-flags-060000"
	I0419 12:35:52.449968    9038 start.go:93] Provisioning new machine with config: &{Name:docker-flags-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:35:52.450248    9038 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:35:52.457920    9038 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0419 12:35:52.504927    9038 start.go:159] libmachine.API.Create for "docker-flags-060000" (driver="qemu2")
	I0419 12:35:52.504982    9038 client.go:168] LocalClient.Create starting
	I0419 12:35:52.505084    9038 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:35:52.505160    9038 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:52.505177    9038 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:52.505241    9038 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:35:52.505285    9038 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:52.505302    9038 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:52.505883    9038 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:35:52.655613    9038 main.go:141] libmachine: Creating SSH key...
	I0419 12:35:52.711758    9038 main.go:141] libmachine: Creating Disk image...
	I0419 12:35:52.711763    9038 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:35:52.711933    9038 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/disk.qcow2
	I0419 12:35:52.724505    9038 main.go:141] libmachine: STDOUT: 
	I0419 12:35:52.724526    9038 main.go:141] libmachine: STDERR: 
	I0419 12:35:52.724582    9038 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/disk.qcow2 +20000M
	I0419 12:35:52.735626    9038 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:35:52.735648    9038 main.go:141] libmachine: STDERR: 
	I0419 12:35:52.735662    9038 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/disk.qcow2
	I0419 12:35:52.735666    9038 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:35:52.735709    9038 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:dc:a8:1c:83:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/docker-flags-060000/disk.qcow2
	I0419 12:35:52.737490    9038 main.go:141] libmachine: STDOUT: 
	I0419 12:35:52.737508    9038 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:35:52.737529    9038 client.go:171] duration metric: took 232.547209ms to LocalClient.Create
	I0419 12:35:54.739666    9038 start.go:128] duration metric: took 2.289441125s to createHost
	I0419 12:35:54.739721    9038 start.go:83] releasing machines lock for "docker-flags-060000", held for 2.289854208s
	W0419 12:35:54.740101    9038 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-060000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-060000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:35:54.752734    9038 out.go:177] 
	W0419 12:35:54.759970    9038 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:35:54.760009    9038 out.go:239] * 
	* 
	W0419 12:35:54.763128    9038 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:35:54.775720    9038 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-060000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-060000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-060000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (81.028375ms)

-- stdout --
	* The control-plane node docker-flags-060000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-060000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-060000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-060000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-060000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-060000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-060000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-060000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-060000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.50575ms)

-- stdout --
	* The control-plane node docker-flags-060000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-060000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-060000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-060000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-060000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-060000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-04-19 12:35:54.921889 -0700 PDT m=+764.484128751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-060000 -n docker-flags-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-060000 -n docker-flags-060000: exit status 7 (31.414042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-060000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-060000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-060000
--- FAIL: TestDockerFlags (10.00s)
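Every create attempt logged above fails at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched and the profile is left at state=Stopped. A minimal triage sketch for the CI host follows; it assumes socket_vmnet was installed via Homebrew (an assumption; this log only confirms the client and socket paths):

	# Does the unix socket exist at the path the tests use?
	ls -l /var/run/socket_vmnet
	# Is the socket_vmnet daemon process running at all?
	pgrep -fl socket_vmnet
	# Assumed Homebrew install: restart the root service
	# (socket_vmnet needs root to create the vmnet interface).
	sudo brew services restart socket_vmnet
	# Sanity check: a healthy daemon accepts the connection and runs the
	# wrapped command instead of printing "Connection refused".
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true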

TestForceSystemdFlag (11.14s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-767000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-767000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.912585917s)

-- stdout --
	* [force-systemd-flag-767000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-767000" primary control-plane node in "force-systemd-flag-767000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-767000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:35:38.986453    9013 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:35:38.986578    9013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:35:38.986581    9013 out.go:304] Setting ErrFile to fd 2...
	I0419 12:35:38.986583    9013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:35:38.986711    9013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:35:38.987806    9013 out.go:298] Setting JSON to false
	I0419 12:35:39.003901    9013 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5709,"bootTime":1713549629,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:35:39.003961    9013 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:35:39.008824    9013 out.go:177] * [force-systemd-flag-767000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:35:39.015646    9013 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:35:39.019780    9013 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:35:39.015718    9013 notify.go:220] Checking for updates...
	I0419 12:35:39.024675    9013 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:35:39.027792    9013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:35:39.030859    9013 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:35:39.033705    9013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:35:39.037064    9013 config.go:182] Loaded profile config "force-systemd-env-617000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:35:39.037131    9013 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:35:39.037175    9013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:35:39.041763    9013 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:35:39.048702    9013 start.go:297] selected driver: qemu2
	I0419 12:35:39.048716    9013 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:35:39.048726    9013 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:35:39.050939    9013 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:35:39.054729    9013 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:35:39.057845    9013 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0419 12:35:39.057884    9013 cni.go:84] Creating CNI manager for ""
	I0419 12:35:39.057894    9013 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:35:39.057899    9013 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 12:35:39.057937    9013 start.go:340] cluster config:
	{Name:force-systemd-flag-767000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:35:39.062265    9013 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:35:39.069541    9013 out.go:177] * Starting "force-systemd-flag-767000" primary control-plane node in "force-systemd-flag-767000" cluster
	I0419 12:35:39.073709    9013 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:35:39.073722    9013 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:35:39.073731    9013 cache.go:56] Caching tarball of preloaded images
	I0419 12:35:39.073786    9013 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:35:39.073791    9013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:35:39.073839    9013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/force-systemd-flag-767000/config.json ...
	I0419 12:35:39.073850    9013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/force-systemd-flag-767000/config.json: {Name:mk233038f62b2c57c91a17ca489659417947951a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:35:39.074065    9013 start.go:360] acquireMachinesLock for force-systemd-flag-767000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:35:39.074099    9013 start.go:364] duration metric: took 27.583µs to acquireMachinesLock for "force-systemd-flag-767000"
	I0419 12:35:39.074112    9013 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-767000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:35:39.074143    9013 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:35:39.081702    9013 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0419 12:35:39.098662    9013 start.go:159] libmachine.API.Create for "force-systemd-flag-767000" (driver="qemu2")
	I0419 12:35:39.098690    9013 client.go:168] LocalClient.Create starting
	I0419 12:35:39.098759    9013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:35:39.098795    9013 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:39.098807    9013 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:39.098858    9013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:35:39.098884    9013 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:39.098892    9013 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:39.099244    9013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:35:39.224073    9013 main.go:141] libmachine: Creating SSH key...
	I0419 12:35:39.308108    9013 main.go:141] libmachine: Creating Disk image...
	I0419 12:35:39.308116    9013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:35:39.308306    9013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/disk.qcow2
	I0419 12:35:39.320932    9013 main.go:141] libmachine: STDOUT: 
	I0419 12:35:39.320953    9013 main.go:141] libmachine: STDERR: 
	I0419 12:35:39.321007    9013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/disk.qcow2 +20000M
	I0419 12:35:39.331778    9013 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:35:39.331795    9013 main.go:141] libmachine: STDERR: 
	I0419 12:35:39.331806    9013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/disk.qcow2
	I0419 12:35:39.331810    9013 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:35:39.331839    9013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:65:9e:c8:b4:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/disk.qcow2
	I0419 12:35:39.333613    9013 main.go:141] libmachine: STDOUT: 
	I0419 12:35:39.333628    9013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:35:39.333647    9013 client.go:171] duration metric: took 234.957917ms to LocalClient.Create
	I0419 12:35:41.335778    9013 start.go:128] duration metric: took 2.261664792s to createHost
	I0419 12:35:41.335841    9013 start.go:83] releasing machines lock for "force-systemd-flag-767000", held for 2.261783167s
	W0419 12:35:41.335919    9013 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:35:41.347340    9013 out.go:177] * Deleting "force-systemd-flag-767000" in qemu2 ...
	W0419 12:35:41.373518    9013 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:35:41.373548    9013 start.go:728] Will try again in 5 seconds ...
	I0419 12:35:46.375694    9013 start.go:360] acquireMachinesLock for force-systemd-flag-767000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:35:47.411998    9013 start.go:364] duration metric: took 1.036215667s to acquireMachinesLock for "force-systemd-flag-767000"
	I0419 12:35:47.412162    9013 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-767000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-767000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:35:47.412469    9013 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:35:47.420896    9013 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0419 12:35:47.470808    9013 start.go:159] libmachine.API.Create for "force-systemd-flag-767000" (driver="qemu2")
	I0419 12:35:47.470862    9013 client.go:168] LocalClient.Create starting
	I0419 12:35:47.471026    9013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:35:47.471082    9013 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:47.471100    9013 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:47.471165    9013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:35:47.471209    9013 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:47.471223    9013 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:47.471837    9013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:35:47.626552    9013 main.go:141] libmachine: Creating SSH key...
	I0419 12:35:47.791831    9013 main.go:141] libmachine: Creating Disk image...
	I0419 12:35:47.791842    9013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:35:47.792042    9013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/disk.qcow2
	I0419 12:35:47.804936    9013 main.go:141] libmachine: STDOUT: 
	I0419 12:35:47.804956    9013 main.go:141] libmachine: STDERR: 
	I0419 12:35:47.805016    9013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/disk.qcow2 +20000M
	I0419 12:35:47.816202    9013 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:35:47.816226    9013 main.go:141] libmachine: STDERR: 
	I0419 12:35:47.816238    9013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/disk.qcow2
	I0419 12:35:47.816242    9013 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:35:47.816278    9013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:18:a3:f8:37:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-flag-767000/disk.qcow2
	I0419 12:35:47.818019    9013 main.go:141] libmachine: STDOUT: 
	I0419 12:35:47.818035    9013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:35:47.818050    9013 client.go:171] duration metric: took 347.191ms to LocalClient.Create
	I0419 12:35:49.820286    9013 start.go:128] duration metric: took 2.407814666s to createHost
	I0419 12:35:49.820376    9013 start.go:83] releasing machines lock for "force-systemd-flag-767000", held for 2.408375459s
	W0419 12:35:49.820715    9013 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-767000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-767000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:35:49.833354    9013 out.go:177] 
	W0419 12:35:49.840487    9013 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:35:49.840516    9013 out.go:239] * 
	* 
	W0419 12:35:49.843172    9013 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:35:49.853183    9013 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-767000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-767000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-767000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.610291ms)

-- stdout --
	* The control-plane node force-systemd-flag-767000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-767000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-767000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-04-19 12:35:49.955445 -0700 PDT m=+759.517572792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-767000 -n force-systemd-flag-767000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-767000 -n force-systemd-flag-767000: exit status 7 (35.140042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-767000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-767000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-767000
--- FAIL: TestForceSystemdFlag (11.14s)
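This failure is the same socket_vmnet symptom seen in TestDockerFlags, not a cgroup regression: the assertion at docker_test.go:110 never reached a running VM. Once the daemon is reachable, the check the test performs can be reproduced by hand with essentially the commands from this log (the expected value, systemd, is the cgroup driver that --force-systemd is meant to select):

	out/minikube-darwin-arm64 start -p force-systemd-flag-767000 --memory=2048 --force-systemd --driver=qemu2
	out/minikube-darwin-arm64 -p force-systemd-flag-767000 ssh "docker info --format {{.CgroupDriver}}"
	# expected on success:
	#   systemd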

TestForceSystemdEnv (10.37s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-617000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-617000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.157763833s)

-- stdout --
	* [force-systemd-env-617000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-617000" primary control-plane node in "force-systemd-env-617000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-617000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:35:34.711788    8993 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:35:34.711944    8993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:35:34.711948    8993 out.go:304] Setting ErrFile to fd 2...
	I0419 12:35:34.711951    8993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:35:34.712098    8993 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:35:34.713150    8993 out.go:298] Setting JSON to false
	I0419 12:35:34.730078    8993 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5705,"bootTime":1713549629,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:35:34.730151    8993 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:35:34.734561    8993 out.go:177] * [force-systemd-env-617000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:35:34.745500    8993 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:35:34.741456    8993 notify.go:220] Checking for updates...
	I0419 12:35:34.753538    8993 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:35:34.757500    8993 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:35:34.760491    8993 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:35:34.763514    8993 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:35:34.766525    8993 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0419 12:35:34.768338    8993 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:35:34.768390    8993 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:35:34.772520    8993 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:35:34.779371    8993 start.go:297] selected driver: qemu2
	I0419 12:35:34.779377    8993 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:35:34.779383    8993 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:35:34.781573    8993 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:35:34.784533    8993 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:35:34.787573    8993 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0419 12:35:34.787599    8993 cni.go:84] Creating CNI manager for ""
	I0419 12:35:34.787605    8993 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:35:34.787615    8993 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 12:35:34.787652    8993 start.go:340] cluster config:
	{Name:force-systemd-env-617000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:35:34.791683    8993 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:35:34.798472    8993 out.go:177] * Starting "force-systemd-env-617000" primary control-plane node in "force-systemd-env-617000" cluster
	I0419 12:35:34.802486    8993 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:35:34.802498    8993 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:35:34.802502    8993 cache.go:56] Caching tarball of preloaded images
	I0419 12:35:34.802550    8993 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:35:34.802555    8993 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:35:34.802596    8993 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/force-systemd-env-617000/config.json ...
	I0419 12:35:34.802606    8993 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/force-systemd-env-617000/config.json: {Name:mk389160af0699bdfd2745d4eda36e5ccfb00795 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:35:34.802881    8993 start.go:360] acquireMachinesLock for force-systemd-env-617000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:35:34.802910    8993 start.go:364] duration metric: took 23.75µs to acquireMachinesLock for "force-systemd-env-617000"
	I0419 12:35:34.802920    8993 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-617000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:35:34.802947    8993 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:35:34.810563    8993 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0419 12:35:34.825695    8993 start.go:159] libmachine.API.Create for "force-systemd-env-617000" (driver="qemu2")
	I0419 12:35:34.825728    8993 client.go:168] LocalClient.Create starting
	I0419 12:35:34.825796    8993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:35:34.825823    8993 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:34.825835    8993 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:34.825871    8993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:35:34.825893    8993 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:34.825899    8993 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:34.826340    8993 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:35:34.949420    8993 main.go:141] libmachine: Creating SSH key...
	I0419 12:35:35.100926    8993 main.go:141] libmachine: Creating Disk image...
	I0419 12:35:35.100935    8993 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:35:35.101143    8993 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/disk.qcow2
	I0419 12:35:35.114466    8993 main.go:141] libmachine: STDOUT: 
	I0419 12:35:35.114487    8993 main.go:141] libmachine: STDERR: 
	I0419 12:35:35.114548    8993 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/disk.qcow2 +20000M
	I0419 12:35:35.125980    8993 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:35:35.126006    8993 main.go:141] libmachine: STDERR: 
	I0419 12:35:35.126023    8993 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/disk.qcow2
	I0419 12:35:35.126046    8993 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:35:35.126072    8993 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:1b:29:65:2e:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/disk.qcow2
	I0419 12:35:35.127855    8993 main.go:141] libmachine: STDOUT: 
	I0419 12:35:35.127870    8993 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:35:35.127891    8993 client.go:171] duration metric: took 302.163792ms to LocalClient.Create
	I0419 12:35:37.130042    8993 start.go:128] duration metric: took 2.327115167s to createHost
	I0419 12:35:37.130134    8993 start.go:83] releasing machines lock for "force-systemd-env-617000", held for 2.32726725s
	W0419 12:35:37.130179    8993 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:35:37.137419    8993 out.go:177] * Deleting "force-systemd-env-617000" in qemu2 ...
	W0419 12:35:37.159937    8993 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:35:37.159969    8993 start.go:728] Will try again in 5 seconds ...
	I0419 12:35:42.162048    8993 start.go:360] acquireMachinesLock for force-systemd-env-617000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:35:42.162627    8993 start.go:364] duration metric: took 481.25µs to acquireMachinesLock for "force-systemd-env-617000"
	I0419 12:35:42.162808    8993 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-617000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:35:42.163106    8993 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:35:42.172759    8993 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0419 12:35:42.223990    8993 start.go:159] libmachine.API.Create for "force-systemd-env-617000" (driver="qemu2")
	I0419 12:35:42.224061    8993 client.go:168] LocalClient.Create starting
	I0419 12:35:42.224179    8993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:35:42.224236    8993 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:42.224250    8993 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:42.224314    8993 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:35:42.224357    8993 main.go:141] libmachine: Decoding PEM data...
	I0419 12:35:42.224372    8993 main.go:141] libmachine: Parsing certificate...
	I0419 12:35:42.224855    8993 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:35:42.386738    8993 main.go:141] libmachine: Creating SSH key...
	I0419 12:35:42.767891    8993 main.go:141] libmachine: Creating Disk image...
	I0419 12:35:42.767903    8993 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:35:42.768156    8993 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/disk.qcow2
	I0419 12:35:42.781410    8993 main.go:141] libmachine: STDOUT: 
	I0419 12:35:42.781433    8993 main.go:141] libmachine: STDERR: 
	I0419 12:35:42.781480    8993 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/disk.qcow2 +20000M
	I0419 12:35:42.792418    8993 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:35:42.792436    8993 main.go:141] libmachine: STDERR: 
	I0419 12:35:42.792451    8993 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/disk.qcow2
	I0419 12:35:42.792454    8993 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:35:42.792484    8993 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:a7:4e:38:96:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/force-systemd-env-617000/disk.qcow2
	I0419 12:35:42.794268    8993 main.go:141] libmachine: STDOUT: 
	I0419 12:35:42.794284    8993 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:35:42.794298    8993 client.go:171] duration metric: took 570.24375ms to LocalClient.Create
	I0419 12:35:44.796513    8993 start.go:128] duration metric: took 2.63338875s to createHost
	I0419 12:35:44.796587    8993 start.go:83] releasing machines lock for "force-systemd-env-617000", held for 2.63396825s
	W0419 12:35:44.796922    8993 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-617000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-617000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:35:44.806360    8993 out.go:177] 
	W0419 12:35:44.810241    8993 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:35:44.810319    8993 out.go:239] * 
	* 
	W0419 12:35:44.813380    8993 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:35:44.821308    8993 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-617000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-617000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-617000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.7085ms)

-- stdout --
	* The control-plane node force-systemd-env-617000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-617000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-617000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-04-19 12:35:44.918092 -0700 PDT m=+754.480107126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-617000 -n force-systemd-env-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-617000 -n force-systemd-env-617000: exit status 7 (35.957458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-617000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-617000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-617000
--- FAIL: TestForceSystemdEnv (10.37s)
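
Every provisioning failure in this report reduces to the same symptom: the qemu2 driver cannot reach the socket_vmnet unix socket, so VM networking setup dies with "Failed to connect to \"/var/run/socket_vmnet\": Connection refused". A minimal diagnostic sketch for the test host follows, assuming socket_vmnet was installed under /opt/socket_vmnet with its stock launchd service; the io.github.lima-vm.socket_vmnet label is an assumption about this runner's setup, not something shown in the logs:

    # Confirm the unix socket exists and a daemon is listening on it.
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet
    # If the service is loaded but wedged, restarting it usually clears the
    # "Connection refused" errors above (launchd label assumed, see note).
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
    # Sanity check: the same client binary the tests use should now connect.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the socket file is absent entirely, the daemon was never started after the last reboot, which would match every test in this run failing in the same way.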

TestErrorSpam/setup (9.79s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-449000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-449000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 --driver=qemu2 : exit status 80 (9.786966958s)

-- stdout --
	* [nospam-449000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-449000" primary control-plane node in "nospam-449000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-449000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-449000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-449000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-449000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-449000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18669
- KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-449000" primary control-plane node in "nospam-449000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-449000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-449000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.79s)

TestFunctional/serial/StartWithProxy (9.86s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-663000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-663000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.791439875s)

-- stdout --
	* [functional-663000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-663000" primary control-plane node in "functional-663000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-663000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51023 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51023 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51023 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-663000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-663000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-663000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18669
- KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-663000" primary control-plane node in "functional-663000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-663000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51023 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51023 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51023 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-663000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (69.889708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.86s)
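
TestFunctional/serial/StartWithProxy never reaches its proxy assertions because the VM dies at socket_vmnet first. For reference, the scenario can be reproduced by hand; this sketch assumes the harness injects the proxy via the HTTP_PROXY environment variable, and the localhost port (51023 in this run) is chosen per run:

    # Hand-run equivalent of the StartWithProxy scenario (port is per-run).
    HTTP_PROXY=localhost:51023 out/minikube-darwin-arm64 start -p functional-663000 \
      --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2
    # On a healthy host the output would include "Found network options:" and
    # a proxy warning; here the run fails earlier, at VM creation.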

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-663000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-663000 --alsologtostderr -v=8: exit status 80 (5.189950375s)

-- stdout --
	* [functional-663000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-663000" primary control-plane node in "functional-663000" cluster
	* Restarting existing qemu2 VM for "functional-663000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-663000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:24:36.366984    7560 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:24:36.367133    7560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:24:36.367137    7560 out.go:304] Setting ErrFile to fd 2...
	I0419 12:24:36.367139    7560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:24:36.367277    7560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:24:36.368286    7560 out.go:298] Setting JSON to false
	I0419 12:24:36.384345    7560 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5047,"bootTime":1713549629,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:24:36.384405    7560 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:24:36.389067    7560 out.go:177] * [functional-663000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:24:36.395965    7560 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:24:36.396038    7560 notify.go:220] Checking for updates...
	I0419 12:24:36.402902    7560 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:24:36.405909    7560 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:24:36.410941    7560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:24:36.414001    7560 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:24:36.416973    7560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:24:36.420234    7560 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:24:36.420277    7560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:24:36.424936    7560 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:24:36.431928    7560 start.go:297] selected driver: qemu2
	I0419 12:24:36.431934    7560 start.go:901] validating driver "qemu2" against &{Name:functional-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:24:36.432015    7560 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:24:36.434346    7560 cni.go:84] Creating CNI manager for ""
	I0419 12:24:36.434364    7560 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:24:36.434405    7560 start.go:340] cluster config:
	{Name:functional-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:24:36.438748    7560 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:24:36.446905    7560 out.go:177] * Starting "functional-663000" primary control-plane node in "functional-663000" cluster
	I0419 12:24:36.450901    7560 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:24:36.450915    7560 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:24:36.450922    7560 cache.go:56] Caching tarball of preloaded images
	I0419 12:24:36.450984    7560 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:24:36.450991    7560 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:24:36.451039    7560 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/functional-663000/config.json ...
	I0419 12:24:36.451509    7560 start.go:360] acquireMachinesLock for functional-663000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:24:36.451540    7560 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "functional-663000"
	I0419 12:24:36.451550    7560 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:24:36.451556    7560 fix.go:54] fixHost starting: 
	I0419 12:24:36.451671    7560 fix.go:112] recreateIfNeeded on functional-663000: state=Stopped err=<nil>
	W0419 12:24:36.451680    7560 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:24:36.458945    7560 out.go:177] * Restarting existing qemu2 VM for "functional-663000" ...
	I0419 12:24:36.463030    7560 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:aa:d7:c4:7b:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/disk.qcow2
	I0419 12:24:36.465114    7560 main.go:141] libmachine: STDOUT: 
	I0419 12:24:36.465138    7560 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:24:36.465180    7560 fix.go:56] duration metric: took 13.623666ms for fixHost
	I0419 12:24:36.465185    7560 start.go:83] releasing machines lock for "functional-663000", held for 13.64125ms
	W0419 12:24:36.465192    7560 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:24:36.465221    7560 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:24:36.465226    7560 start.go:728] Will try again in 5 seconds ...
	I0419 12:24:41.467332    7560 start.go:360] acquireMachinesLock for functional-663000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:24:41.467778    7560 start.go:364] duration metric: took 347.125µs to acquireMachinesLock for "functional-663000"
	I0419 12:24:41.467905    7560 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:24:41.467923    7560 fix.go:54] fixHost starting: 
	I0419 12:24:41.468550    7560 fix.go:112] recreateIfNeeded on functional-663000: state=Stopped err=<nil>
	W0419 12:24:41.468581    7560 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:24:41.472109    7560 out.go:177] * Restarting existing qemu2 VM for "functional-663000" ...
	I0419 12:24:41.480082    7560 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:aa:d7:c4:7b:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/disk.qcow2
	I0419 12:24:41.489206    7560 main.go:141] libmachine: STDOUT: 
	I0419 12:24:41.489276    7560 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:24:41.489348    7560 fix.go:56] duration metric: took 21.422042ms for fixHost
	I0419 12:24:41.489369    7560 start.go:83] releasing machines lock for "functional-663000", held for 21.565875ms
	W0419 12:24:41.489495    7560 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-663000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-663000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:24:41.496899    7560 out.go:177] 
	W0419 12:24:41.501005    7560 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:24:41.501032    7560 out.go:239] * 
	* 
	W0419 12:24:41.503644    7560 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:24:41.510941    7560 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-663000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.191718708s for "functional-663000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (68.080833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (31.388208ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-663000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (32.331333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
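
Because the cluster never started, minikube never wrote a functional-663000 entry into the kubeconfig, so this failure and the kubectl-based ones below are all downstream of the socket_vmnet error. Standard kubectl commands to confirm the state (nothing minikube-specific is assumed):

    # List known contexts; a healthy run would show functional-663000 marked current.
    kubectl config get-contexts
    kubectl config current-context    # fails here: "current-context is not set"
    # After a successful start, select the context explicitly if needed:
    kubectl config use-context functional-663000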

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-663000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-663000 get po -A: exit status 1 (26.207458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-663000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-663000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-663000\n"*: args "kubectl --context functional-663000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-663000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (32.505042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh sudo crictl images: exit status 83 (42.824625ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-663000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (41.8375ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-663000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.893375ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (42.880417ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-663000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.64s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 kubectl -- --context functional-663000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 kubectl -- --context functional-663000 get pods: exit status 1 (601.620875ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-663000
	* no server found for cluster "functional-663000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-663000 kubectl -- --context functional-663000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (33.948708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.64s)
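
The pass-through itself worked; exit status 1 comes from kubectl, which finds no context for the never-started cluster. A quick confirmation (a sketch, run against the same kubeconfig the test uses):

    kubectl config get-contexts   # no functional-663000 entry, matching the "context was not found" error above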

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.95s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-663000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-663000 get pods: exit status 1 (913.955666ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-663000
	* no server found for cluster "functional-663000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-663000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (31.423875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.95s)
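
The direct invocation fails the same way because the bundled kubectl reads the same kubeconfig. To check exactly what that binary sees, the run's KUBECONFIG can be passed explicitly (a sketch; path as reported by minikube in the next block):

    KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig out/kubectl config get-contexts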

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-663000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-663000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.190229167s)

-- stdout --
	* [functional-663000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-663000" primary control-plane node in "functional-663000" cluster
	* Restarting existing qemu2 VM for "functional-663000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-663000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-663000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-663000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.190714792s for "functional-663000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (69.906583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
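
Both restart attempts die at the same point: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the VM never boots. Typical triage on the build host (a sketch, assuming a Homebrew-managed socket_vmnet, as the client path in the log suggests):

    ls -l /var/run/socket_vmnet               # socket must exist; "Connection refused" usually means the daemon is not running
    sudo brew services restart socket_vmnet   # restart the daemon, then retry: minikube start -p functional-663000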

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-663000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-663000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.692667ms)

** stderr ** 
	error: context "functional-663000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-663000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (32.327083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
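
On a running cluster the same selector returns the control-plane pods (kube-apiserver, kube-controller-manager, kube-scheduler, etcd). A compact variant of the health check, using jsonpath instead of the test's full JSON parse (a sketch):

    kubectl --context functional-663000 get po -l tier=control-plane -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'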

TestFunctional/serial/LogsCmd (0.09s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 logs: exit status 83 (90.860542ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-668000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
	|         | -p download-only-668000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
	| delete  | -p download-only-668000                                                  | download-only-668000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
	| start   | -o=json --download-only                                                  | download-only-907000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
	|         | -p download-only-907000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
	| delete  | -p download-only-907000                                                  | download-only-907000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
	| delete  | -p download-only-668000                                                  | download-only-668000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
	| delete  | -p download-only-907000                                                  | download-only-907000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
	| start   | --download-only -p                                                       | binary-mirror-839000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
	|         | binary-mirror-839000                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
	|         | --binary-mirror                                                          |                      |         |                |                     |                     |
	|         | http://127.0.0.1:50990                                                   |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-839000                                                  | binary-mirror-839000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
	| addons  | enable dashboard -p                                                      | addons-040000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
	|         | addons-040000                                                            |                      |         |                |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-040000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
	|         | addons-040000                                                            |                      |         |                |                     |                     |
	| start   | -p addons-040000 --wait=true                                             | addons-040000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
	|         | --addons=registry                                                        |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
	| delete  | -p addons-040000                                                         | addons-040000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
	| start   | -p nospam-449000 -n=1 --memory=2250 --wait=false                         | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| start   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| pause   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| unpause | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| stop    | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| delete  | -p nospam-449000                                                         | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
	| start   | -p functional-663000                                                     | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | --memory=4000                                                            |                      |         |                |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
	| start   | -p functional-663000                                                     | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
	| cache   | functional-663000 cache add                                              | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | functional-663000 cache add                                              | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | functional-663000 cache add                                              | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-663000 cache add                                              | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
	|         | minikube-local-cache-test:functional-663000                              |                      |         |                |                     |                     |
	| cache   | functional-663000 cache delete                                           | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
	|         | minikube-local-cache-test:functional-663000                              |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
	| ssh     | functional-663000 ssh sudo                                               | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | crictl images                                                            |                      |         |                |                     |                     |
	| ssh     | functional-663000                                                        | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| ssh     | functional-663000 ssh                                                    | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-663000 cache reload                                           | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
	| ssh     | functional-663000 ssh                                                    | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| kubectl | functional-663000 kubectl --                                             | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | --context functional-663000                                              |                      |         |                |                     |                     |
	|         | get pods                                                                 |                      |         |                |                     |                     |
	| start   | -p functional-663000                                                     | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
	|         | --wait=all                                                               |                      |         |                |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 12:24:46
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 12:24:46.697219    7642 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:24:46.697354    7642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:24:46.697356    7642 out.go:304] Setting ErrFile to fd 2...
	I0419 12:24:46.697358    7642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:24:46.697478    7642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:24:46.698515    7642 out.go:298] Setting JSON to false
	I0419 12:24:46.714433    7642 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5057,"bootTime":1713549629,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:24:46.714496    7642 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:24:46.720280    7642 out.go:177] * [functional-663000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:24:46.729268    7642 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:24:46.733300    7642 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:24:46.729293    7642 notify.go:220] Checking for updates...
	I0419 12:24:46.740237    7642 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:24:46.743277    7642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:24:46.746206    7642 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:24:46.749230    7642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:24:46.752575    7642 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:24:46.752627    7642 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:24:46.757195    7642 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:24:46.764276    7642 start.go:297] selected driver: qemu2
	I0419 12:24:46.764281    7642 start.go:901] validating driver "qemu2" against &{Name:functional-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:24:46.764342    7642 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:24:46.766740    7642 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:24:46.766783    7642 cni.go:84] Creating CNI manager for ""
	I0419 12:24:46.766790    7642 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:24:46.766829    7642 start.go:340] cluster config:
	{Name:functional-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:24:46.771209    7642 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:24:46.779200    7642 out.go:177] * Starting "functional-663000" primary control-plane node in "functional-663000" cluster
	I0419 12:24:46.782242    7642 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:24:46.782257    7642 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:24:46.782268    7642 cache.go:56] Caching tarball of preloaded images
	I0419 12:24:46.782329    7642 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:24:46.782334    7642 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:24:46.782409    7642 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/functional-663000/config.json ...
	I0419 12:24:46.782892    7642 start.go:360] acquireMachinesLock for functional-663000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:24:46.782925    7642 start.go:364] duration metric: took 28.792µs to acquireMachinesLock for "functional-663000"
	I0419 12:24:46.782933    7642 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:24:46.782937    7642 fix.go:54] fixHost starting: 
	I0419 12:24:46.783057    7642 fix.go:112] recreateIfNeeded on functional-663000: state=Stopped err=<nil>
	W0419 12:24:46.783064    7642 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:24:46.791084    7642 out.go:177] * Restarting existing qemu2 VM for "functional-663000" ...
	I0419 12:24:46.795269    7642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:aa:d7:c4:7b:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/disk.qcow2
	I0419 12:24:46.797336    7642 main.go:141] libmachine: STDOUT: 
	I0419 12:24:46.797353    7642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:24:46.797381    7642 fix.go:56] duration metric: took 14.444125ms for fixHost
	I0419 12:24:46.797385    7642 start.go:83] releasing machines lock for "functional-663000", held for 14.457625ms
	W0419 12:24:46.797389    7642 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:24:46.797429    7642 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:24:46.797433    7642 start.go:728] Will try again in 5 seconds ...
	I0419 12:24:51.799569    7642 start.go:360] acquireMachinesLock for functional-663000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:24:51.799946    7642 start.go:364] duration metric: took 289.125µs to acquireMachinesLock for "functional-663000"
	I0419 12:24:51.800059    7642 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:24:51.800069    7642 fix.go:54] fixHost starting: 
	I0419 12:24:51.800811    7642 fix.go:112] recreateIfNeeded on functional-663000: state=Stopped err=<nil>
	W0419 12:24:51.800831    7642 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:24:51.809203    7642 out.go:177] * Restarting existing qemu2 VM for "functional-663000" ...
	I0419 12:24:51.813415    7642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:aa:d7:c4:7b:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/disk.qcow2
	I0419 12:24:51.822512    7642 main.go:141] libmachine: STDOUT: 
	I0419 12:24:51.822573    7642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:24:51.822663    7642 fix.go:56] duration metric: took 22.594334ms for fixHost
	I0419 12:24:51.822681    7642 start.go:83] releasing machines lock for "functional-663000", held for 22.722292ms
	W0419 12:24:51.822853    7642 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-663000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:24:51.830197    7642 out.go:177] 
	W0419 12:24:51.833244    7642 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:24:51.833266    7642 out.go:239] * 
	W0419 12:24:51.835921    7642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:24:51.844150    7642 out.go:177] 
	
	
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-663000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-668000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
|         | -p download-only-668000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
| delete  | -p download-only-668000                                                  | download-only-668000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
| start   | -o=json --download-only                                                  | download-only-907000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
|         | -p download-only-907000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.30.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
| delete  | -p download-only-907000                                                  | download-only-907000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
| delete  | -p download-only-668000                                                  | download-only-668000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
| delete  | -p download-only-907000                                                  | download-only-907000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
| start   | --download-only -p                                                       | binary-mirror-839000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
|         | binary-mirror-839000                                                     |                      |         |                |                     |                     |
|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
|         | --binary-mirror                                                          |                      |         |                |                     |                     |
|         | http://127.0.0.1:50990                                                   |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | -p binary-mirror-839000                                                  | binary-mirror-839000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
| addons  | enable dashboard -p                                                      | addons-040000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
|         | addons-040000                                                            |                      |         |                |                     |                     |
| addons  | disable dashboard -p                                                     | addons-040000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
|         | addons-040000                                                            |                      |         |                |                     |                     |
| start   | -p addons-040000 --wait=true                                             | addons-040000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
|         | --addons=registry                                                        |                      |         |                |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
| delete  | -p addons-040000                                                         | addons-040000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
| start   | -p nospam-449000 -n=1 --memory=2250 --wait=false                         | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| start   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| pause   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| unpause | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| stop    | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| delete  | -p nospam-449000                                                         | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
| start   | -p functional-663000                                                     | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | --memory=4000                                                            |                      |         |                |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
| start   | -p functional-663000                                                     | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
| cache   | functional-663000 cache add                                              | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | functional-663000 cache add                                              | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | functional-663000 cache add                                              | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-663000 cache add                                              | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | minikube-local-cache-test:functional-663000                              |                      |         |                |                     |                     |
| cache   | functional-663000 cache delete                                           | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | minikube-local-cache-test:functional-663000                              |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
| ssh     | functional-663000 ssh sudo                                               | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | crictl images                                                            |                      |         |                |                     |                     |
| ssh     | functional-663000                                                        | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| ssh     | functional-663000 ssh                                                    | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-663000 cache reload                                           | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
| ssh     | functional-663000 ssh                                                    | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| kubectl | functional-663000 kubectl --                                             | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | --context functional-663000                                              |                      |         |                |                     |                     |
|         | get pods                                                                 |                      |         |                |                     |                     |
| start   | -p functional-663000                                                     | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
|         | --wait=all                                                               |                      |         |                |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/04/19 12:24:46
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0419 12:24:46.697219    7642 out.go:291] Setting OutFile to fd 1 ...
I0419 12:24:46.697354    7642 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:24:46.697356    7642 out.go:304] Setting ErrFile to fd 2...
I0419 12:24:46.697358    7642 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:24:46.697478    7642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
I0419 12:24:46.698515    7642 out.go:298] Setting JSON to false
I0419 12:24:46.714433    7642 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5057,"bootTime":1713549629,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0419 12:24:46.714496    7642 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0419 12:24:46.720280    7642 out.go:177] * [functional-663000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
I0419 12:24:46.729268    7642 out.go:177]   - MINIKUBE_LOCATION=18669
I0419 12:24:46.733300    7642 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
I0419 12:24:46.729293    7642 notify.go:220] Checking for updates...
I0419 12:24:46.740237    7642 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0419 12:24:46.743277    7642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0419 12:24:46.746206    7642 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
I0419 12:24:46.749230    7642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0419 12:24:46.752575    7642 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 12:24:46.752627    7642 driver.go:392] Setting default libvirt URI to qemu:///system
I0419 12:24:46.757195    7642 out.go:177] * Using the qemu2 driver based on existing profile
I0419 12:24:46.764276    7642 start.go:297] selected driver: qemu2
I0419 12:24:46.764281    7642 start.go:901] validating driver "qemu2" against &{Name:functional-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0419 12:24:46.764342    7642 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0419 12:24:46.766740    7642 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0419 12:24:46.766783    7642 cni.go:84] Creating CNI manager for ""
I0419 12:24:46.766790    7642 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0419 12:24:46.766829    7642 start.go:340] cluster config:
{Name:functional-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0419 12:24:46.771209    7642 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0419 12:24:46.779200    7642 out.go:177] * Starting "functional-663000" primary control-plane node in "functional-663000" cluster
I0419 12:24:46.782242    7642 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0419 12:24:46.782257    7642 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
I0419 12:24:46.782268    7642 cache.go:56] Caching tarball of preloaded images
I0419 12:24:46.782329    7642 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0419 12:24:46.782334    7642 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0419 12:24:46.782409    7642 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/functional-663000/config.json ...
I0419 12:24:46.782892    7642 start.go:360] acquireMachinesLock for functional-663000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0419 12:24:46.782925    7642 start.go:364] duration metric: took 28.792µs to acquireMachinesLock for "functional-663000"
I0419 12:24:46.782933    7642 start.go:96] Skipping create...Using existing machine configuration
I0419 12:24:46.782937    7642 fix.go:54] fixHost starting: 
I0419 12:24:46.783057    7642 fix.go:112] recreateIfNeeded on functional-663000: state=Stopped err=<nil>
W0419 12:24:46.783064    7642 fix.go:138] unexpected machine state, will restart: <nil>
I0419 12:24:46.791084    7642 out.go:177] * Restarting existing qemu2 VM for "functional-663000" ...
I0419 12:24:46.795269    7642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:aa:d7:c4:7b:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/disk.qcow2
I0419 12:24:46.797336    7642 main.go:141] libmachine: STDOUT: 
I0419 12:24:46.797353    7642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0419 12:24:46.797381    7642 fix.go:56] duration metric: took 14.444125ms for fixHost
I0419 12:24:46.797385    7642 start.go:83] releasing machines lock for "functional-663000", held for 14.457625ms
W0419 12:24:46.797389    7642 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0419 12:24:46.797429    7642 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0419 12:24:46.797433    7642 start.go:728] Will try again in 5 seconds ...
I0419 12:24:51.799569    7642 start.go:360] acquireMachinesLock for functional-663000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0419 12:24:51.799946    7642 start.go:364] duration metric: took 289.125µs to acquireMachinesLock for "functional-663000"
I0419 12:24:51.800059    7642 start.go:96] Skipping create...Using existing machine configuration
I0419 12:24:51.800069    7642 fix.go:54] fixHost starting: 
I0419 12:24:51.800811    7642 fix.go:112] recreateIfNeeded on functional-663000: state=Stopped err=<nil>
W0419 12:24:51.800831    7642 fix.go:138] unexpected machine state, will restart: <nil>
I0419 12:24:51.809203    7642 out.go:177] * Restarting existing qemu2 VM for "functional-663000" ...
I0419 12:24:51.813415    7642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:aa:d7:c4:7b:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/disk.qcow2
I0419 12:24:51.822512    7642 main.go:141] libmachine: STDOUT: 
I0419 12:24:51.822573    7642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0419 12:24:51.822663    7642 fix.go:56] duration metric: took 22.594334ms for fixHost
I0419 12:24:51.822681    7642 start.go:83] releasing machines lock for "functional-663000", held for 22.722292ms
W0419 12:24:51.822853    7642 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-663000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0419 12:24:51.830197    7642 out.go:177] 
W0419 12:24:51.833244    7642 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0419 12:24:51.833266    7642 out.go:239] * 
W0419 12:24:51.835921    7642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0419 12:24:51.844150    7642 out.go:177] 

* The control-plane node functional-663000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-663000"
***
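
Both logs tests fail before their content checks can run: every attempt to restart the qemu2 VM is refused at the socket_vmnet control socket ("Failed to connect to "/var/run/socket_vmnet": Connection refused" above), so "minikube logs" has no guest to read from. A minimal triage sketch for the CI host, assuming socket_vmnet is installed at the paths shown in the qemu command line above; the Homebrew service name is an assumption, not taken from this report:

  # Does the control socket exist, and is the daemon alive?
  ls -l /var/run/socket_vmnet
  pgrep -fl socket_vmnet
  # If socket_vmnet was installed via Homebrew, it can be restarted as a root service
  sudo brew services restart socket_vmnet
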
--- FAIL: TestFunctional/serial/LogsCmd (0.09s)

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd875811941/001/logs.txt
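
The command above writes "minikube logs" output to a file, and the test then checks that file for the word "Linux", which normally comes from the guest kernel line in the logs; with the VM stopped the file holds only host-side output, so the expected word is missing, as the assertion below shows. A rough manual equivalent, assuming the same profile and an arbitrary output path:

  out/minikube-darwin-arm64 -p functional-663000 logs --file /tmp/logs.txt
  grep -c Linux /tmp/logs.txt   # non-zero on a healthy cluster; 0 when the VM never booted
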
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-668000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
|         | -p download-only-668000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
| delete  | -p download-only-668000                                                  | download-only-668000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
| start   | -o=json --download-only                                                  | download-only-907000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
|         | -p download-only-907000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.30.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
| delete  | -p download-only-907000                                                  | download-only-907000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
| delete  | -p download-only-668000                                                  | download-only-668000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
| delete  | -p download-only-907000                                                  | download-only-907000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
| start   | --download-only -p                                                       | binary-mirror-839000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
|         | binary-mirror-839000                                                     |                      |         |                |                     |                     |
|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
|         | --binary-mirror                                                          |                      |         |                |                     |                     |
|         | http://127.0.0.1:50990                                                   |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | -p binary-mirror-839000                                                  | binary-mirror-839000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
| addons  | enable dashboard -p                                                      | addons-040000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
|         | addons-040000                                                            |                      |         |                |                     |                     |
| addons  | disable dashboard -p                                                     | addons-040000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
|         | addons-040000                                                            |                      |         |                |                     |                     |
| start   | -p addons-040000 --wait=true                                             | addons-040000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
|         | --addons=registry                                                        |                      |         |                |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
|         | --addons=ingress                                                         |                      |         |                |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
| delete  | -p addons-040000                                                         | addons-040000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
| start   | -p nospam-449000 -n=1 --memory=2250 --wait=false                         | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| start   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| pause   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| unpause | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| stop    | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-449000 --log_dir                                                  | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| delete  | -p nospam-449000                                                         | nospam-449000        | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
| start   | -p functional-663000                                                     | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | --memory=4000                                                            |                      |         |                |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
| start   | -p functional-663000                                                     | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
| cache   | functional-663000 cache add                                              | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | functional-663000 cache add                                              | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | functional-663000 cache add                                              | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-663000 cache add                                              | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | minikube-local-cache-test:functional-663000                              |                      |         |                |                     |                     |
| cache   | functional-663000 cache delete                                           | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | minikube-local-cache-test:functional-663000                              |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
| ssh     | functional-663000 ssh sudo                                               | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | crictl images                                                            |                      |         |                |                     |                     |
| ssh     | functional-663000                                                        | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| ssh     | functional-663000 ssh                                                    | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-663000 cache reload                                           | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
| ssh     | functional-663000 ssh                                                    | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT | 19 Apr 24 12:24 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| kubectl | functional-663000 kubectl --                                             | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | --context functional-663000                                              |                      |         |                |                     |                     |
|         | get pods                                                                 |                      |         |                |                     |                     |
| start   | -p functional-663000                                                     | functional-663000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:24 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
|         | --wait=all                                                               |                      |         |                |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/04/19 12:24:46
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0419 12:24:46.697219    7642 out.go:291] Setting OutFile to fd 1 ...
I0419 12:24:46.697354    7642 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:24:46.697356    7642 out.go:304] Setting ErrFile to fd 2...
I0419 12:24:46.697358    7642 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:24:46.697478    7642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
I0419 12:24:46.698515    7642 out.go:298] Setting JSON to false
I0419 12:24:46.714433    7642 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5057,"bootTime":1713549629,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0419 12:24:46.714496    7642 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0419 12:24:46.720280    7642 out.go:177] * [functional-663000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
I0419 12:24:46.729268    7642 out.go:177]   - MINIKUBE_LOCATION=18669
I0419 12:24:46.733300    7642 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
I0419 12:24:46.729293    7642 notify.go:220] Checking for updates...
I0419 12:24:46.740237    7642 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0419 12:24:46.743277    7642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0419 12:24:46.746206    7642 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
I0419 12:24:46.749230    7642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0419 12:24:46.752575    7642 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 12:24:46.752627    7642 driver.go:392] Setting default libvirt URI to qemu:///system
I0419 12:24:46.757195    7642 out.go:177] * Using the qemu2 driver based on existing profile
I0419 12:24:46.764276    7642 start.go:297] selected driver: qemu2
I0419 12:24:46.764281    7642 start.go:901] validating driver "qemu2" against &{Name:functional-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0419 12:24:46.764342    7642 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0419 12:24:46.766740    7642 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0419 12:24:46.766783    7642 cni.go:84] Creating CNI manager for ""
I0419 12:24:46.766790    7642 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0419 12:24:46.766829    7642 start.go:340] cluster config:
{Name:functional-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0419 12:24:46.771209    7642 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0419 12:24:46.779200    7642 out.go:177] * Starting "functional-663000" primary control-plane node in "functional-663000" cluster
I0419 12:24:46.782242    7642 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0419 12:24:46.782257    7642 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
I0419 12:24:46.782268    7642 cache.go:56] Caching tarball of preloaded images
I0419 12:24:46.782329    7642 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0419 12:24:46.782334    7642 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0419 12:24:46.782409    7642 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/functional-663000/config.json ...
I0419 12:24:46.782892    7642 start.go:360] acquireMachinesLock for functional-663000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0419 12:24:46.782925    7642 start.go:364] duration metric: took 28.792µs to acquireMachinesLock for "functional-663000"
I0419 12:24:46.782933    7642 start.go:96] Skipping create...Using existing machine configuration
I0419 12:24:46.782937    7642 fix.go:54] fixHost starting: 
I0419 12:24:46.783057    7642 fix.go:112] recreateIfNeeded on functional-663000: state=Stopped err=<nil>
W0419 12:24:46.783064    7642 fix.go:138] unexpected machine state, will restart: <nil>
I0419 12:24:46.791084    7642 out.go:177] * Restarting existing qemu2 VM for "functional-663000" ...
I0419 12:24:46.795269    7642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:aa:d7:c4:7b:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/disk.qcow2
I0419 12:24:46.797336    7642 main.go:141] libmachine: STDOUT: 
I0419 12:24:46.797353    7642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0419 12:24:46.797381    7642 fix.go:56] duration metric: took 14.444125ms for fixHost
I0419 12:24:46.797385    7642 start.go:83] releasing machines lock for "functional-663000", held for 14.457625ms
W0419 12:24:46.797389    7642 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0419 12:24:46.797429    7642 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0419 12:24:46.797433    7642 start.go:728] Will try again in 5 seconds ...
I0419 12:24:51.799569    7642 start.go:360] acquireMachinesLock for functional-663000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0419 12:24:51.799946    7642 start.go:364] duration metric: took 289.125µs to acquireMachinesLock for "functional-663000"
I0419 12:24:51.800059    7642 start.go:96] Skipping create...Using existing machine configuration
I0419 12:24:51.800069    7642 fix.go:54] fixHost starting: 
I0419 12:24:51.800811    7642 fix.go:112] recreateIfNeeded on functional-663000: state=Stopped err=<nil>
W0419 12:24:51.800831    7642 fix.go:138] unexpected machine state, will restart: <nil>
I0419 12:24:51.809203    7642 out.go:177] * Restarting existing qemu2 VM for "functional-663000" ...
I0419 12:24:51.813415    7642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:aa:d7:c4:7b:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/functional-663000/disk.qcow2
I0419 12:24:51.822512    7642 main.go:141] libmachine: STDOUT: 
I0419 12:24:51.822573    7642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0419 12:24:51.822663    7642 fix.go:56] duration metric: took 22.594334ms for fixHost
I0419 12:24:51.822681    7642 start.go:83] releasing machines lock for "functional-663000", held for 22.722292ms
W0419 12:24:51.822853    7642 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-663000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0419 12:24:51.830197    7642 out.go:177] 
W0419 12:24:51.833244    7642 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0419 12:24:51.833266    7642 out.go:239] * 
W0419 12:24:51.835921    7642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0419 12:24:51.844150    7642 out.go:177] 
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
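
Triage note: every start failure in this report reduces to the same root cause, the qemu2 driver cannot connect to the socket_vmnet daemon at /var/run/socket_vmnet, so the VM never boots and every later test sees a stopped host. A minimal Go sketch (an editorial addition for triage, not part of the test suite) that reproduces the same connectivity check:

// probe_socket_vmnet.go: dials the same unix socket the qemu2 driver
// uses, so a rerun can be gated on socket_vmnet accepting connections.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the logs above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the exact state the report shows: "Connection refused".
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}

If this probe fails the way the logs above do, restarting the socket_vmnet service on the CI host is the first thing to try.
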
TestFunctional/serial/InvalidService (0.03s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-663000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-663000 apply -f testdata/invalidsvc.yaml: exit status 1 (28.10975ms)
** stderr ** 
	error: context "functional-663000" does not exist
** /stderr **
functional_test.go:2319: kubectl --context functional-663000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
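
Note: the "context functional-663000 does not exist" errors here and throughout are a downstream symptom: because the VM never started, minikube never wrote a kubeconfig entry for the profile. A hedged sketch of how a caller could verify the context before invoking kubectl (the contextExists helper is invented for illustration):

// context_check.go: asks kubectl for known context names, the same
// source kubectl consults when it rejects --context above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("functional-663000")
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl not usable:", err)
		os.Exit(1)
	}
	fmt.Println("context present:", ok) // false while the VM has never started
}
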
TestFunctional/parallel/DashboardCmd (0.2s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-663000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-663000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-663000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-663000 --alsologtostderr -v=1] stderr:
I0419 12:25:34.708130    7974 out.go:291] Setting OutFile to fd 1 ...
I0419 12:25:34.708562    7974 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:25:34.708565    7974 out.go:304] Setting ErrFile to fd 2...
I0419 12:25:34.708568    7974 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:25:34.708711    7974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
I0419 12:25:34.708927    7974 mustload.go:65] Loading cluster: functional-663000
I0419 12:25:34.709125    7974 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 12:25:34.712302    7974 out.go:177] * The control-plane node functional-663000 host is not running: state=Stopped
I0419 12:25:34.716242    7974 out.go:177]   To start a cluster, run: "minikube start -p functional-663000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (43.745542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)
TestFunctional/parallel/StatusCmd (0.13s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 status: exit status 7 (32.492375ms)
-- stdout --
	functional-663000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-663000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (32.218417ms)
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-663000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 status -o json: exit status 7 (31.6225ms)
-- stdout --
	{"Name":"functional-663000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-663000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (32.018958ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)
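
Note: the JSON document in the stdout block above is the full machine-readable status shape. A sketch that decodes it, with struct fields copied verbatim from that output rather than from minikube's source; in this report, exit status 7 consistently pairs with host: Stopped:

// status_json.go: decodes the status JSON shown in the log above.
package main

import (
	"encoding/json"
	"fmt"
)

type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"functional-663000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// A stopped host pairs with exit status 7 throughout this report,
	// which helpers_test.go treats as "may be ok".
	fmt.Printf("%s: host=%s apiserver=%s\n", st.Name, st.Host, st.APIServer)
}
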
TestFunctional/parallel/ServiceCmdConnect (0.14s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-663000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-663000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.926375ms)
** stderr ** 
	error: context "functional-663000" does not exist
** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-663000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-663000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-663000 describe po hello-node-connect: exit status 1 (26.025583ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-663000
** /stderr **
functional_test.go:1600: "kubectl --context functional-663000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-663000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-663000 logs -l app=hello-node-connect: exit status 1 (26.675542ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-663000
** /stderr **
functional_test.go:1606: "kubectl --context functional-663000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-663000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-663000 describe svc hello-node-connect: exit status 1 (26.434333ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-663000
** /stderr **
functional_test.go:1612: "kubectl --context functional-663000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (32.683792ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)
TestFunctional/parallel/PersistentVolumeClaim (0.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-663000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (32.637333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)
TestFunctional/parallel/SSHCmd (0.13s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "echo hello": exit status 83 (47.072417ms)
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"
-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-663000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-663000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-663000\"\n"*. args "out/minikube-darwin-arm64 -p functional-663000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "cat /etc/hostname": exit status 83 (47.965583ms)
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"
-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-663000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-663000"- but got *"* The control-plane node functional-663000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-663000\"\n"*. args "out/minikube-darwin-arm64 -p functional-663000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (35.562ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)
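
Note: throughout this report, exit status 83 accompanies the "host is not running" advice on stdout rather than a genuine ssh failure. A sketch of how a Go caller separates that advice path from a hard failure (the binary path is the workspace-relative one the tests use; run from the same directory or adjust it):

// exit_code.go: recovers minikube's exit code with the standard
// os/exec error type.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-663000", "ssh", "echo hello")
	out, err := cmd.Output() // Output still returns captured stdout on ExitError
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("ssh ran: %s", out)
	case errors.As(err, &exitErr):
		// 83 in this report means minikube printed advice instead of running ssh.
		fmt.Printf("minikube exited %d; stdout was: %q\n", exitErr.ExitCode(), out)
	default:
		fmt.Println("could not start minikube:", err)
	}
}
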
TestFunctional/parallel/CpCmd (0.29s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (57.311333ms)
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-663000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh -n functional-663000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh -n functional-663000 "sudo cat /home/docker/cp-test.txt": exit status 83 (44.207583ms)
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-663000 ssh -n functional-663000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-663000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-663000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 cp functional-663000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1790268631/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 cp functional-663000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1790268631/001/cp-test.txt: exit status 83 (47.100959ms)
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-663000 cp functional-663000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1790268631/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh -n functional-663000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh -n functional-663000 "sudo cat /home/docker/cp-test.txt": exit status 83 (45.240792ms)
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-663000 ssh -n functional-663000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1790268631/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-663000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-663000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (44.620375ms)
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-663000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh -n functional-663000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh -n functional-663000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (45.404458ms)
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-663000 ssh -n functional-663000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-663000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-663000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.29s)
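
Note: the "-want +got" blocks above appear to be in the diff format of github.com/google/go-cmp (the strings.Join layout matches its output); "-" lines are the expected file content, "+" lines are the advice message minikube printed instead. A sketch that reproduces such a diff, assuming the go-cmp module is available:

// cmp_diff.go: renders the same want/got pair the test compared.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-663000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-663000\"\n"
	// "-" lines in the printed diff are want, "+" lines are got.
	fmt.Println(cmp.Diff(want, got))
}
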
TestFunctional/parallel/FileSync (0.08s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7304/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "sudo cat /etc/test/nested/copy/7304/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "sudo cat /etc/test/nested/copy/7304/hosts": exit status 83 (44.02975ms)
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"
-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-663000 ssh "sudo cat /etc/test/nested/copy/7304/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-663000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-663000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-663000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-663000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (31.792584ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)
TestFunctional/parallel/CertSync (0.3s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7304.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "sudo cat /etc/ssl/certs/7304.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "sudo cat /etc/ssl/certs/7304.pem": exit status 83 (43.255583ms)
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/7304.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-663000 ssh \"sudo cat /etc/ssl/certs/7304.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7304.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-663000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-663000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7304.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "sudo cat /usr/share/ca-certificates/7304.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "sudo cat /usr/share/ca-certificates/7304.pem": exit status 83 (48.700166ms)
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/7304.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-663000 ssh \"sudo cat /usr/share/ca-certificates/7304.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7304.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-663000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-663000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (50.657458ms)
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-663000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-663000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-663000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/73042.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "sudo cat /etc/ssl/certs/73042.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "sudo cat /etc/ssl/certs/73042.pem": exit status 83 (42.666875ms)
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/73042.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-663000 ssh \"sudo cat /etc/ssl/certs/73042.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/73042.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-663000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-663000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/73042.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "sudo cat /usr/share/ca-certificates/73042.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "sudo cat /usr/share/ca-certificates/73042.pem": exit status 83 (42.318416ms)
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"
-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/73042.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-663000 ssh \"sudo cat /usr/share/ca-certificates/73042.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/73042.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-663000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-663000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (41.464458ms)
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-663000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-663000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-663000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (32.142375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.30s)
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-663000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-663000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.495875ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-663000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-663000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-663000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-663000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-663000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-663000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-663000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-663000 -n functional-663000: exit status 7 (33.742959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
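
Every kubectl failure in this test reduces to the same root cause: the "functional-663000" context never made it into the kubeconfig because the cluster never started. A minimal Go sketch (illustration only, not part of the test suite) of the lookup kubectl performs here, assuming k8s.io/client-go is available:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load kubeconfig from the default locations ($KUBECONFIG or ~/.kube/config).
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		// kubectl's "context was not found" error means this map has no entry
		// for the profile, which is exactly what the stderr above reports.
		if _, ok := cfg.Contexts["functional-663000"]; !ok {
			fmt.Println("context functional-663000 is missing (cluster never started)")
		}
	}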

TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "sudo systemctl is-active crio": exit status 83 (46.878167ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-663000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-663000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)
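
The assertion at functional_test.go:2029 expects `systemctl is-active crio` to report "inactive" on a docker-runtime node; here the command never reaches the node at all. A rough Go equivalent of the check (a sketch assuming a local systemd host rather than `minikube ssh`):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// `systemctl is-active` prints the unit state and exits non-zero when
		// the unit is not active, so a non-zero exit alone is not a failure here.
		out, _ := exec.Command("systemctl", "is-active", "crio").Output()
		state := strings.TrimSpace(string(out))
		if state != "inactive" {
			fmt.Printf("expected crio to be inactive on a docker-runtime node, got %q\n", state)
		}
	}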

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 version -o=json --components: exit status 83 (43.910542ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-663000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-663000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-663000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-663000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-663000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-663000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-663000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-663000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-663000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-663000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-663000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-663000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-663000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-663000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-663000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-663000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-663000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-663000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-663000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-663000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-663000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-663000 image ls --format short --alsologtostderr:
I0419 12:25:35.128285    7989 out.go:291] Setting OutFile to fd 1 ...
I0419 12:25:35.128437    7989 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:25:35.128441    7989 out.go:304] Setting ErrFile to fd 2...
I0419 12:25:35.128443    7989 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:25:35.128587    7989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
I0419 12:25:35.128985    7989 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 12:25:35.129040    7989 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-663000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-663000 image ls --format table --alsologtostderr:
I0419 12:25:35.360131    8001 out.go:291] Setting OutFile to fd 1 ...
I0419 12:25:35.360288    8001 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:25:35.360291    8001 out.go:304] Setting ErrFile to fd 2...
I0419 12:25:35.360293    8001 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:25:35.360420    8001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
I0419 12:25:35.360842    8001 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 12:25:35.360899    8001 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-663000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-663000 image ls --format json --alsologtostderr:
I0419 12:25:35.322366    7999 out.go:291] Setting OutFile to fd 1 ...
I0419 12:25:35.322525    7999 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:25:35.322528    7999 out.go:304] Setting ErrFile to fd 2...
I0419 12:25:35.322530    7999 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:25:35.322673    7999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
I0419 12:25:35.323071    7999 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 12:25:35.323128    7999 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-663000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-663000 image ls --format yaml --alsologtostderr:
I0419 12:25:35.166783    7991 out.go:291] Setting OutFile to fd 1 ...
I0419 12:25:35.166945    7991 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:25:35.166948    7991 out.go:304] Setting ErrFile to fd 2...
I0419 12:25:35.166950    7991 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:25:35.167084    7991 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
I0419 12:25:35.167502    7991 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 12:25:35.167562    7991 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh pgrep buildkitd: exit status 83 (44.254833ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image build -t localhost/my-image:functional-663000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-663000 image build -t localhost/my-image:functional-663000 testdata/build --alsologtostderr:
I0419 12:25:35.246089    7995 out.go:291] Setting OutFile to fd 1 ...
I0419 12:25:35.246560    7995 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:25:35.246564    7995 out.go:304] Setting ErrFile to fd 2...
I0419 12:25:35.246570    7995 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:25:35.246752    7995 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
I0419 12:25:35.247133    7995 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 12:25:35.247569    7995 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 12:25:35.247787    7995 build_images.go:133] succeeded building to: 
I0419 12:25:35.247791    7995 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image ls
functional_test.go:442: expected "localhost/my-image:functional-663000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-663000 docker-env) && out/minikube-darwin-arm64 status -p functional-663000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-663000 docker-env) && out/minikube-darwin-arm64 status -p functional-663000": exit status 1 (47.870584ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 update-context --alsologtostderr -v=2: exit status 83 (43.639167ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
** stderr ** 
	I0419 12:25:34.992445    7983 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:25:34.992855    7983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:25:34.992860    7983 out.go:304] Setting ErrFile to fd 2...
	I0419 12:25:34.992862    7983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:25:34.992998    7983 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:25:34.993215    7983 mustload.go:65] Loading cluster: functional-663000
	I0419 12:25:34.993409    7983 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:25:34.997821    7983 out.go:177] * The control-plane node functional-663000 host is not running: state=Stopped
	I0419 12:25:35.001799    7983 out.go:177]   To start a cluster, run: "minikube start -p functional-663000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-663000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-663000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-663000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 update-context --alsologtostderr -v=2: exit status 83 (44.633542ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
** stderr ** 
	I0419 12:25:35.083144    7987 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:25:35.083273    7987 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:25:35.083276    7987 out.go:304] Setting ErrFile to fd 2...
	I0419 12:25:35.083278    7987 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:25:35.083401    7987 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:25:35.083683    7987 mustload.go:65] Loading cluster: functional-663000
	I0419 12:25:35.083882    7987 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:25:35.088817    7987 out.go:177] * The control-plane node functional-663000 host is not running: state=Stopped
	I0419 12:25:35.092847    7987 out.go:177]   To start a cluster, run: "minikube start -p functional-663000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-663000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-663000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-663000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 update-context --alsologtostderr -v=2: exit status 83 (46.650042ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
** stderr ** 
	I0419 12:25:35.036107    7985 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:25:35.036273    7985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:25:35.036276    7985 out.go:304] Setting ErrFile to fd 2...
	I0419 12:25:35.036278    7985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:25:35.036394    7985 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:25:35.036605    7985 mustload.go:65] Loading cluster: functional-663000
	I0419 12:25:35.036801    7985 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:25:35.040937    7985 out.go:177] * The control-plane node functional-663000 host is not running: state=Stopped
	I0419 12:25:35.047848    7985 out.go:177]   To start a cluster, run: "minikube start -p functional-663000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-663000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-663000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-663000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-663000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-663000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.195584ms)

** stderr ** 
	error: context "functional-663000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-663000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 service list: exit status 83 (44.924333ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-663000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-663000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-663000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 service list -o json: exit status 83 (45.581417ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-663000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 service --namespace=default --https --url hello-node: exit status 83 (44.983542ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-663000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 service hello-node --url --format={{.IP}}: exit status 83 (44.858833ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-663000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-663000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-663000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 service hello-node --url: exit status 83 (44.794ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-663000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-663000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-663000"
functional_test.go:1565: failed to parse "* The control-plane node functional-663000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-663000\"": parse "* The control-plane node functional-663000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-663000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
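
The parse error at functional_test.go:1565 is mechanical: minikube printed its two-line advisory instead of a URL, and the embedded newline is the "invalid control character". A tiny Go reproduction (illustration only) using net/url:

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		// The service command returned advisory text instead of an endpoint;
		// the newline inside it is rejected by the URL parser.
		s := "* The control-plane node functional-663000 host is not running: state=Stopped\n" +
			"  To start a cluster, run: \"minikube start -p functional-663000\""
		if _, err := url.Parse(s); err != nil {
			fmt.Println(err) // net/url: invalid control character in URL
		}
	}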

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-663000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-663000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0419 12:24:53.697520    7764 out.go:291] Setting OutFile to fd 1 ...
I0419 12:24:53.697716    7764 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:24:53.697724    7764 out.go:304] Setting ErrFile to fd 2...
I0419 12:24:53.697726    7764 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:24:53.697882    7764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
I0419 12:24:53.698109    7764 mustload.go:65] Loading cluster: functional-663000
I0419 12:24:53.698336    7764 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 12:24:53.703943    7764 out.go:177] * The control-plane node functional-663000 host is not running: state=Stopped
I0419 12:24:53.713303    7764 out.go:177]   To start a cluster, run: "minikube start -p functional-663000"

stdout: * The control-plane node functional-663000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-663000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-663000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7765: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-663000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-663000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-663000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-663000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-663000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-663000": client config: context "functional-663000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.59s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-663000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-663000 get svc nginx-svc: exit status 1 (71.176375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-663000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-663000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.59s)
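
The "no Host in request URL" failure follows from the tunnel never publishing a service IP: the test ends up requesting the bare scheme. A one-line Go reproduction (illustrative only):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// With no tunnel running the service IP is empty, so the request URL
		// is "http://" + "" and net/http rejects it exactly as the log shows.
		_, err := http.Get("http://")
		fmt.Println(err) // Get "http:": http: no Host in request URL
	}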

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image load --daemon gcr.io/google-containers/addon-resizer:functional-663000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-663000 image load --daemon gcr.io/google-containers/addon-resizer:functional-663000 --alsologtostderr: (1.29302975s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-663000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image load --daemon gcr.io/google-containers/addon-resizer:functional-663000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-663000 image load --daemon gcr.io/google-containers/addon-resizer:functional-663000 --alsologtostderr: (1.310735916s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-663000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.295358375s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-663000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image load --daemon gcr.io/google-containers/addon-resizer:functional-663000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-663000 image load --daemon gcr.io/google-containers/addon-resizer:functional-663000 --alsologtostderr: (1.156446416s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-663000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.53s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image save gcr.io/google-containers/addon-resizer:functional-663000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-663000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.035740959s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 13 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
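
The dig query targets the cluster DNS at 10.96.0.10 (resolver #8 in the scutil dump above), which is only reachable while the tunnel is routing traffic. A Go sketch of the same lookup, forcing the resolver to that server (illustration, not test code):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Equivalent of `dig @10.96.0.10 nginx-svc.default.svc.cluster.local. A`.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed (tunnel not routing to 10.96.0.10):", err)
			return
		}
		fmt.Println(addrs)
	}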

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (32.85s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (32.85s)

TestMultiControlPlane/serial/StartCluster (10.06s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-527000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-527000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.986917917s)

-- stdout --
	* [ha-527000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-527000" primary control-plane node in "ha-527000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-527000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:27:39.853995    8055 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:27:39.854138    8055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:27:39.854141    8055 out.go:304] Setting ErrFile to fd 2...
	I0419 12:27:39.854144    8055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:27:39.854572    8055 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:27:39.856143    8055 out.go:298] Setting JSON to false
	I0419 12:27:39.872544    8055 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5230,"bootTime":1713549629,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:27:39.872615    8055 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:27:39.878414    8055 out.go:177] * [ha-527000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:27:39.884461    8055 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:27:39.884511    8055 notify.go:220] Checking for updates...
	I0419 12:27:39.891355    8055 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:27:39.894338    8055 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:27:39.897409    8055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:27:39.900357    8055 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:27:39.903377    8055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:27:39.906498    8055 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:27:39.909272    8055 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:27:39.916397    8055 start.go:297] selected driver: qemu2
	I0419 12:27:39.916403    8055 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:27:39.916409    8055 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:27:39.918663    8055 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:27:39.919959    8055 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:27:39.923500    8055 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:27:39.923552    8055 cni.go:84] Creating CNI manager for ""
	I0419 12:27:39.923557    8055 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0419 12:27:39.923561    8055 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0419 12:27:39.923604    8055 start.go:340] cluster config:
	{Name:ha-527000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-527000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:27:39.928188    8055 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:27:39.936335    8055 out.go:177] * Starting "ha-527000" primary control-plane node in "ha-527000" cluster
	I0419 12:27:39.940358    8055 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:27:39.940372    8055 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:27:39.940380    8055 cache.go:56] Caching tarball of preloaded images
	I0419 12:27:39.940436    8055 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:27:39.940442    8055 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:27:39.940664    8055 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/ha-527000/config.json ...
	I0419 12:27:39.940682    8055 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/ha-527000/config.json: {Name:mk397fe7645eab187ed8d77a7ead188647a59eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:27:39.941083    8055 start.go:360] acquireMachinesLock for ha-527000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:27:39.941119    8055 start.go:364] duration metric: took 29.041µs to acquireMachinesLock for "ha-527000"
	I0419 12:27:39.941131    8055 start.go:93] Provisioning new machine with config: &{Name:ha-527000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-527000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:27:39.941164    8055 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:27:39.950351    8055 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:27:39.968169    8055 start.go:159] libmachine.API.Create for "ha-527000" (driver="qemu2")
	I0419 12:27:39.968205    8055 client.go:168] LocalClient.Create starting
	I0419 12:27:39.968271    8055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:27:39.968302    8055 main.go:141] libmachine: Decoding PEM data...
	I0419 12:27:39.968317    8055 main.go:141] libmachine: Parsing certificate...
	I0419 12:27:39.968363    8055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:27:39.968403    8055 main.go:141] libmachine: Decoding PEM data...
	I0419 12:27:39.968414    8055 main.go:141] libmachine: Parsing certificate...
	I0419 12:27:39.968819    8055 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:27:40.089677    8055 main.go:141] libmachine: Creating SSH key...
	I0419 12:27:40.244325    8055 main.go:141] libmachine: Creating Disk image...
	I0419 12:27:40.244331    8055 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:27:40.244530    8055 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2
	I0419 12:27:40.257505    8055 main.go:141] libmachine: STDOUT: 
	I0419 12:27:40.257528    8055 main.go:141] libmachine: STDERR: 
	I0419 12:27:40.257578    8055 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2 +20000M
	I0419 12:27:40.268691    8055 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:27:40.268710    8055 main.go:141] libmachine: STDERR: 
	I0419 12:27:40.268727    8055 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2
	I0419 12:27:40.268731    8055 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:27:40.268762    8055 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:29:f2:07:18:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2
	I0419 12:27:40.270545    8055 main.go:141] libmachine: STDOUT: 
	I0419 12:27:40.270563    8055 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:27:40.270585    8055 client.go:171] duration metric: took 302.381792ms to LocalClient.Create
	I0419 12:27:42.272724    8055 start.go:128] duration metric: took 2.331589375s to createHost
	I0419 12:27:42.272768    8055 start.go:83] releasing machines lock for "ha-527000", held for 2.331692916s
	W0419 12:27:42.272834    8055 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:27:42.286099    8055 out.go:177] * Deleting "ha-527000" in qemu2 ...
	W0419 12:27:42.310602    8055 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:27:42.310637    8055 start.go:728] Will try again in 5 seconds ...
	I0419 12:27:47.312732    8055 start.go:360] acquireMachinesLock for ha-527000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:27:47.313241    8055 start.go:364] duration metric: took 402.5µs to acquireMachinesLock for "ha-527000"
	I0419 12:27:47.313402    8055 start.go:93] Provisioning new machine with config: &{Name:ha-527000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-527000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:27:47.313687    8055 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:27:47.325538    8055 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:27:47.377931    8055 start.go:159] libmachine.API.Create for "ha-527000" (driver="qemu2")
	I0419 12:27:47.377981    8055 client.go:168] LocalClient.Create starting
	I0419 12:27:47.378069    8055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:27:47.378131    8055 main.go:141] libmachine: Decoding PEM data...
	I0419 12:27:47.378149    8055 main.go:141] libmachine: Parsing certificate...
	I0419 12:27:47.378208    8055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:27:47.378249    8055 main.go:141] libmachine: Decoding PEM data...
	I0419 12:27:47.378264    8055 main.go:141] libmachine: Parsing certificate...
	I0419 12:27:47.378901    8055 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:27:47.513807    8055 main.go:141] libmachine: Creating SSH key...
	I0419 12:27:47.736942    8055 main.go:141] libmachine: Creating Disk image...
	I0419 12:27:47.736951    8055 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:27:47.737190    8055 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2
	I0419 12:27:47.750698    8055 main.go:141] libmachine: STDOUT: 
	I0419 12:27:47.750720    8055 main.go:141] libmachine: STDERR: 
	I0419 12:27:47.750778    8055 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2 +20000M
	I0419 12:27:47.761915    8055 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:27:47.761936    8055 main.go:141] libmachine: STDERR: 
	I0419 12:27:47.761951    8055 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2
	I0419 12:27:47.761954    8055 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:27:47.761984    8055 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:72:6d:54:ee:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2
	I0419 12:27:47.763821    8055 main.go:141] libmachine: STDOUT: 
	I0419 12:27:47.763840    8055 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:27:47.763852    8055 client.go:171] duration metric: took 385.876208ms to LocalClient.Create
	I0419 12:27:49.765983    8055 start.go:128] duration metric: took 2.452302375s to createHost
	I0419 12:27:49.766041    8055 start.go:83] releasing machines lock for "ha-527000", held for 2.452827792s
	W0419 12:27:49.766469    8055 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-527000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-527000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:27:49.777103    8055 out.go:177] 
	W0419 12:27:49.781251    8055 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:27:49.781282    8055 out.go:239] * 
	* 
	W0419 12:27:49.783923    8055 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:27:49.794100    8055 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-527000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (68.629833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.06s)
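
Every start attempt above dies at the same point: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. Below is a minimal Go sketch of that connectivity probe, useful for checking whether the socket_vmnet daemon is actually serving the socket on the CI host. It is a hypothetical diagnostic, not part of the test suite; only the socket path is taken from the logs above.

	// socketprobe.go: dial the socket_vmnet control socket the same way the
	// failing QEMU launch does, and report whether anything is listening.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failure logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the libmachine STDERR above:
			// the socket file may exist, but no daemon is accepting on it.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
	}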

                                                
                                    
TestMultiControlPlane/serial/DeployApp (119.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (60.714375ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-527000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- rollout status deployment/busybox: exit status 1 (59.792542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.07175ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.268875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.504375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.765875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.533042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.2555ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.305459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.8535ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.447958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.974083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.280584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.794ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.570083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- exec  -- nslookup kubernetes.default: exit status 1 (59.036458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (59.337791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (31.907375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (119.42s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-527000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.074958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-527000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (32.692958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-527000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-527000 -v=7 --alsologtostderr: exit status 83 (44.216459ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-527000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-527000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:29:49.422080    8151 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:29:49.422427    8151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:49.422433    8151 out.go:304] Setting ErrFile to fd 2...
	I0419 12:29:49.422436    8151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:49.422579    8151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:29:49.422884    8151 mustload.go:65] Loading cluster: ha-527000
	I0419 12:29:49.423072    8151 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:29:49.428187    8151 out.go:177] * The control-plane node ha-527000 host is not running: state=Stopped
	I0419 12:29:49.431956    8151 out.go:177]   To start a cluster, run: "minikube start -p ha-527000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-527000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (32.328834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-527000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-527000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (25.824542ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-527000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-527000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-527000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (32.387958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
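
The kubectl failures in this group all share one cause: because the cluster never started, no "ha-527000" entry was ever written to the kubeconfig, so both `minikube kubectl -p ha-527000` ("cluster does not exist") and `kubectl --context ha-527000` ("context was not found") fail before reaching any API server. The following Go sketch shows a pre-flight check a helper could run before shelling out to kubectl; it is a hypothetical helper (not from helpers_test.go) that relies on `kubectl config get-contexts -o name`.

	// contextcheck.go: verify a kubeconfig context exists before invoking
	// kubectl against it, to distinguish "cluster never came up" from
	// genuine API-server errors.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hasContext(name string) (bool, error) {
		// `kubectl config get-contexts -o name` prints one context per line.
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if ctx == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasContext("ha-527000")
		fmt.Printf("context present: %v (err: %v)\n", ok, err)
	}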

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-527000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-527000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-527000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-527000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-527000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-527000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-527000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-527000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (32.293083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)
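
The assertion compares the node count and status embedded in `minikube profile list --output json` against what a healthy HA cluster should report (4 nodes, status "HAppy"); the stopped profile instead reports one node and "Stopped". A sketch of that extraction in Go follows. The field names are taken from the JSON in the failure message above, and only the fields needed for the check are declared; it is an illustration, not the test's own code.

	// profilenodes.go: decode the node list out of `minikube profile list
	// --output json` and report per-profile status and node count.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
			Config struct {
				Nodes []struct {
					ControlPlane bool `json:"ControlPlane"`
					Worker       bool `json:"Worker"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// This run reports 1 node and "Stopped" where the HA test
			// expects 4 nodes and "HAppy".
			fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
		}
	}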

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 status --output json -v=7 --alsologtostderr: exit status 7 (31.813ms)

                                                
                                                
-- stdout --
	{"Name":"ha-527000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:29:49.660840    8164 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:29:49.661055    8164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:49.661060    8164 out.go:304] Setting ErrFile to fd 2...
	I0419 12:29:49.661063    8164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:49.661183    8164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:29:49.661296    8164 out.go:298] Setting JSON to true
	I0419 12:29:49.661307    8164 mustload.go:65] Loading cluster: ha-527000
	I0419 12:29:49.661372    8164 notify.go:220] Checking for updates...
	I0419 12:29:49.661500    8164 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:29:49.661506    8164 status.go:255] checking status of ha-527000 ...
	I0419 12:29:49.661717    8164 status.go:330] ha-527000 host status = "Stopped" (err=<nil>)
	I0419 12:29:49.661720    8164 status.go:343] host is not running, skipping remaining checks
	I0419 12:29:49.661722    8164 status.go:257] ha-527000 status: &{Name:ha-527000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-527000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (32.021084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
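
The decode error is informative on its own: `minikube status --output json` emits a single JSON object for a one-node profile, while the multi-node test decodes into a `[]cmd.Status` slice, hence "cannot unmarshal object into Go value of type []cmd.Status". A tolerant decoder can accept both shapes; the sketch below uses field names taken from the stdout above, and the fallback behavior is an assumption for illustration rather than minikube's own code.

	// statusdecode.go: decode `minikube status --output json`, accepting
	// either an array (multi-node) or a bare object (single node).
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func decodeStatuses(raw []byte) ([]status, error) {
		var many []status
		if err := json.Unmarshal(raw, &many); err == nil {
			return many, nil
		}
		// Fall back to the single-object shape seen in this run's stdout.
		var one status
		if err := json.Unmarshal(raw, &one); err != nil {
			return nil, err
		}
		return []status{one}, nil
	}

	func main() {
		raw := []byte(`{"Name":"ha-527000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
		sts, err := decodeStatuses(raw)
		fmt.Println(sts, err)
	}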

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 node stop m02 -v=7 --alsologtostderr: exit status 85 (48.687333ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:29:49.725770    8168 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:29:49.726027    8168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:49.726030    8168 out.go:304] Setting ErrFile to fd 2...
	I0419 12:29:49.726033    8168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:49.726160    8168 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:29:49.726423    8168 mustload.go:65] Loading cluster: ha-527000
	I0419 12:29:49.726607    8168 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:29:49.731202    8168 out.go:177] 
	W0419 12:29:49.734070    8168 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0419 12:29:49.734074    8168 out.go:239] * 
	* 
	W0419 12:29:49.735919    8168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:29:49.739150    8168 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-527000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr: exit status 7 (32.078666ms)

                                                
                                                
-- stdout --
	ha-527000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:29:49.774392    8170 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:29:49.774536    8170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:49.774539    8170 out.go:304] Setting ErrFile to fd 2...
	I0419 12:29:49.774542    8170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:49.774677    8170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:29:49.774811    8170 out.go:298] Setting JSON to false
	I0419 12:29:49.774822    8170 mustload.go:65] Loading cluster: ha-527000
	I0419 12:29:49.774884    8170 notify.go:220] Checking for updates...
	I0419 12:29:49.775068    8170 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:29:49.775076    8170 status.go:255] checking status of ha-527000 ...
	I0419 12:29:49.775273    8170 status.go:330] ha-527000 host status = "Stopped" (err=<nil>)
	I0419 12:29:49.775277    8170 status.go:343] host is not running, skipping remaining checks
	I0419 12:29:49.775279    8170 status.go:257] ha-527000 status: &{Name:ha-527000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr": ha-527000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr": ha-527000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr": ha-527000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr": ha-527000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (32.547917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
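Note: the stop never happened. Exit status 85 here is the same GUEST_NODE_RETRIEVE exit seen under `node start` below ("Could not find node m02"), because the multi-node cluster was never created in the first place, so the subsequent `status` shows only a single stopped control plane. The assertions at ha_test.go:375-384 apparently count per-node state lines in that plain-text status output; a minimal Go sketch of such a count (countState is a hypothetical helper, not the test's actual code):

package main

import (
	"fmt"
	"strings"
)

// countState counts nodes whose status block reports "<field>: <state>",
// assuming the plain-text format shown in the stdout blocks above.
func countState(statusOut, field, state string) int {
	n := 0
	for _, line := range strings.Split(statusOut, "\n") {
		if strings.TrimSpace(line) == field+": "+state {
			n++
		}
	}
	return n
}

func main() {
	out := "ha-527000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	fmt.Println(countState(out, "host", "Running"))      // 0 here; ha_test.go:378 wants 3
	fmt.Println(countState(out, "apiserver", "Running")) // 0 here; ha_test.go:384 wants 2
}

With the output above every "Running" count is zero, so all four checks fail at once.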

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-527000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-527000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-527000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-527000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (31.673875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.10s)
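Note: the Degraded check reads the profile status straight out of `minikube profile list --output json`. A minimal sketch of that decode, assuming only the two fields used here (the struct is a trimmed stand-in, field names follow the JSON dumped above, and the binary path is this job's build output):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test wants "Degraded" after one of three control planes is
		// stopped; this run reports "Stopped" because no VM ever started.
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}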

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (49.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 node start m02 -v=7 --alsologtostderr: exit status 85 (46.822ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:29:49.942224    8180 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:29:49.942461    8180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:49.942464    8180 out.go:304] Setting ErrFile to fd 2...
	I0419 12:29:49.942467    8180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:49.942592    8180 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:29:49.942827    8180 mustload.go:65] Loading cluster: ha-527000
	I0419 12:29:49.943002    8180 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:29:49.947613    8180 out.go:177] 
	W0419 12:29:49.950649    8180 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0419 12:29:49.950653    8180 out.go:239] * 
	* 
	W0419 12:29:49.952583    8180 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:29:49.954076    8180 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0419 12:29:49.942224    8180 out.go:291] Setting OutFile to fd 1 ...
I0419 12:29:49.942461    8180 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:29:49.942464    8180 out.go:304] Setting ErrFile to fd 2...
I0419 12:29:49.942467    8180 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:29:49.942592    8180 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
I0419 12:29:49.942827    8180 mustload.go:65] Loading cluster: ha-527000
I0419 12:29:49.943002    8180 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 12:29:49.947613    8180 out.go:177] 
W0419 12:29:49.950649    8180 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0419 12:29:49.950653    8180 out.go:239] * 
* 
W0419 12:29:49.952583    8180 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0419 12:29:49.954076    8180 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-527000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr: exit status 7 (32.508333ms)

                                                
                                                
-- stdout --
	ha-527000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:29:49.989374    8182 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:29:49.989513    8182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:49.989516    8182 out.go:304] Setting ErrFile to fd 2...
	I0419 12:29:49.989520    8182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:49.989651    8182 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:29:49.989771    8182 out.go:298] Setting JSON to false
	I0419 12:29:49.989784    8182 mustload.go:65] Loading cluster: ha-527000
	I0419 12:29:49.989837    8182 notify.go:220] Checking for updates...
	I0419 12:29:49.989995    8182 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:29:49.990000    8182 status.go:255] checking status of ha-527000 ...
	I0419 12:29:49.990226    8182 status.go:330] ha-527000 host status = "Stopped" (err=<nil>)
	I0419 12:29:49.990229    8182 status.go:343] host is not running, skipping remaining checks
	I0419 12:29:49.990232    8182 status.go:257] ha-527000 status: &{Name:ha-527000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr: exit status 7 (77.281459ms)

                                                
                                                
-- stdout --
	ha-527000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:29:51.025128    8184 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:29:51.025338    8184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:51.025342    8184 out.go:304] Setting ErrFile to fd 2...
	I0419 12:29:51.025345    8184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:51.025508    8184 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:29:51.025669    8184 out.go:298] Setting JSON to false
	I0419 12:29:51.025684    8184 mustload.go:65] Loading cluster: ha-527000
	I0419 12:29:51.025722    8184 notify.go:220] Checking for updates...
	I0419 12:29:51.025939    8184 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:29:51.025950    8184 status.go:255] checking status of ha-527000 ...
	I0419 12:29:51.026214    8184 status.go:330] ha-527000 host status = "Stopped" (err=<nil>)
	I0419 12:29:51.026219    8184 status.go:343] host is not running, skipping remaining checks
	I0419 12:29:51.026222    8184 status.go:257] ha-527000 status: &{Name:ha-527000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr: exit status 7 (76.318667ms)

                                                
                                                
-- stdout --
	ha-527000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:29:52.888795    8186 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:29:52.888995    8186 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:52.888999    8186 out.go:304] Setting ErrFile to fd 2...
	I0419 12:29:52.889002    8186 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:52.889152    8186 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:29:52.889299    8186 out.go:298] Setting JSON to false
	I0419 12:29:52.889314    8186 mustload.go:65] Loading cluster: ha-527000
	I0419 12:29:52.889351    8186 notify.go:220] Checking for updates...
	I0419 12:29:52.889570    8186 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:29:52.889578    8186 status.go:255] checking status of ha-527000 ...
	I0419 12:29:52.889854    8186 status.go:330] ha-527000 host status = "Stopped" (err=<nil>)
	I0419 12:29:52.889859    8186 status.go:343] host is not running, skipping remaining checks
	I0419 12:29:52.889862    8186 status.go:257] ha-527000 status: &{Name:ha-527000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr: exit status 7 (76.004417ms)

                                                
                                                
-- stdout --
	ha-527000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:29:55.975272    8188 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:29:55.975475    8188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:55.975479    8188 out.go:304] Setting ErrFile to fd 2...
	I0419 12:29:55.975482    8188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:55.975665    8188 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:29:55.975813    8188 out.go:298] Setting JSON to false
	I0419 12:29:55.975835    8188 mustload.go:65] Loading cluster: ha-527000
	I0419 12:29:55.975866    8188 notify.go:220] Checking for updates...
	I0419 12:29:55.976098    8188 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:29:55.976105    8188 status.go:255] checking status of ha-527000 ...
	I0419 12:29:55.976366    8188 status.go:330] ha-527000 host status = "Stopped" (err=<nil>)
	I0419 12:29:55.976371    8188 status.go:343] host is not running, skipping remaining checks
	I0419 12:29:55.976378    8188 status.go:257] ha-527000 status: &{Name:ha-527000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr: exit status 7 (77.490416ms)

                                                
                                                
-- stdout --
	ha-527000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:29:58.745268    8190 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:29:58.745470    8190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:58.745474    8190 out.go:304] Setting ErrFile to fd 2...
	I0419 12:29:58.745477    8190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:29:58.745663    8190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:29:58.745816    8190 out.go:298] Setting JSON to false
	I0419 12:29:58.745831    8190 mustload.go:65] Loading cluster: ha-527000
	I0419 12:29:58.745866    8190 notify.go:220] Checking for updates...
	I0419 12:29:58.746068    8190 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:29:58.746075    8190 status.go:255] checking status of ha-527000 ...
	I0419 12:29:58.746374    8190 status.go:330] ha-527000 host status = "Stopped" (err=<nil>)
	I0419 12:29:58.746378    8190 status.go:343] host is not running, skipping remaining checks
	I0419 12:29:58.746381    8190 status.go:257] ha-527000 status: &{Name:ha-527000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr: exit status 7 (76.840125ms)

                                                
                                                
-- stdout --
	ha-527000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:30:03.611370    8204 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:30:03.611546    8204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:03.611551    8204 out.go:304] Setting ErrFile to fd 2...
	I0419 12:30:03.611554    8204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:03.611742    8204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:30:03.611904    8204 out.go:298] Setting JSON to false
	I0419 12:30:03.611919    8204 mustload.go:65] Loading cluster: ha-527000
	I0419 12:30:03.611952    8204 notify.go:220] Checking for updates...
	I0419 12:30:03.612172    8204 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:30:03.612180    8204 status.go:255] checking status of ha-527000 ...
	I0419 12:30:03.612439    8204 status.go:330] ha-527000 host status = "Stopped" (err=<nil>)
	I0419 12:30:03.612444    8204 status.go:343] host is not running, skipping remaining checks
	I0419 12:30:03.612447    8204 status.go:257] ha-527000 status: &{Name:ha-527000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr: exit status 7 (76.605833ms)

                                                
                                                
-- stdout --
	ha-527000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:30:08.630919    8207 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:30:08.631127    8207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:08.631132    8207 out.go:304] Setting ErrFile to fd 2...
	I0419 12:30:08.631136    8207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:08.631316    8207 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:30:08.631486    8207 out.go:298] Setting JSON to false
	I0419 12:30:08.631502    8207 mustload.go:65] Loading cluster: ha-527000
	I0419 12:30:08.631543    8207 notify.go:220] Checking for updates...
	I0419 12:30:08.631763    8207 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:30:08.631770    8207 status.go:255] checking status of ha-527000 ...
	I0419 12:30:08.632030    8207 status.go:330] ha-527000 host status = "Stopped" (err=<nil>)
	I0419 12:30:08.632034    8207 status.go:343] host is not running, skipping remaining checks
	I0419 12:30:08.632038    8207 status.go:257] ha-527000 status: &{Name:ha-527000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr: exit status 7 (78.331875ms)

                                                
                                                
-- stdout --
	ha-527000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:30:22.367515    8213 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:30:22.367718    8213 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:22.367722    8213 out.go:304] Setting ErrFile to fd 2...
	I0419 12:30:22.367725    8213 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:22.367883    8213 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:30:22.368042    8213 out.go:298] Setting JSON to false
	I0419 12:30:22.368057    8213 mustload.go:65] Loading cluster: ha-527000
	I0419 12:30:22.368097    8213 notify.go:220] Checking for updates...
	I0419 12:30:22.368330    8213 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:30:22.368336    8213 status.go:255] checking status of ha-527000 ...
	I0419 12:30:22.368616    8213 status.go:330] ha-527000 host status = "Stopped" (err=<nil>)
	I0419 12:30:22.368621    8213 status.go:343] host is not running, skipping remaining checks
	I0419 12:30:22.368624    8213 status.go:257] ha-527000 status: &{Name:ha-527000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr: exit status 7 (77.080583ms)

                                                
                                                
-- stdout --
	ha-527000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:30:39.784752    8218 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:30:39.784972    8218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:39.784976    8218 out.go:304] Setting ErrFile to fd 2...
	I0419 12:30:39.784979    8218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:39.785147    8218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:30:39.785298    8218 out.go:298] Setting JSON to false
	I0419 12:30:39.785311    8218 mustload.go:65] Loading cluster: ha-527000
	I0419 12:30:39.785343    8218 notify.go:220] Checking for updates...
	I0419 12:30:39.785575    8218 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:30:39.785582    8218 status.go:255] checking status of ha-527000 ...
	I0419 12:30:39.785881    8218 status.go:330] ha-527000 host status = "Stopped" (err=<nil>)
	I0419 12:30:39.785885    8218 status.go:343] host is not running, skipping remaining checks
	I0419 12:30:39.785888    8218 status.go:257] ha-527000 status: &{Name:ha-527000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (34.240458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (49.91s)
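Note: nearly all of this test's 49.91s is the polling above: `status` is re-run with growing gaps (from about 1s up to 17s between invocations) until the nodes report Running or the budget is exhausted. A sketch of that retry shape (the 50-second budget and doubling backoff are illustrative assumptions, not the test's actual constants):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(50 * time.Second)
	wait := time.Second
	for time.Now().Before(deadline) {
		// Output() still returns the captured stdout when the command
		// exits non-zero, which is all this check needs.
		out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "ha-527000",
			"status", "-v=7", "--alsologtostderr").Output()
		if !strings.Contains(string(out), "Stopped") {
			fmt.Println("cluster is up")
			return
		}
		time.Sleep(wait)
		wait *= 2 // back off, as the growing gaps between the runs above suggest
	}
	fmt.Println("gave up: nodes still report Stopped")
}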

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-527000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-527000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-527000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-527000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-527000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-527000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-527000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-527000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (31.660833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)
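Note: besides the HAppy status, ha_test.go:304 checks the node count recorded in the profile config: at this point in the sequence the cluster should hold 4 nodes (presumably three control planes plus the added worker), but the JSON above still lists only the single primary. A sketch of that count over the same payload shape (the inline sample is a trimmed stand-in for the real output):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Trimmed stand-in for the `profile list --output json` payload above.
	data := []byte(`{"valid":[{"Name":"ha-527000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var pl struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []json.RawMessage `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}
	if err := json.Unmarshal(data, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s); the test expects 4\n", p.Name, len(p.Config.Nodes))
	}
}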

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-527000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-527000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-527000 -v=7 --alsologtostderr: (1.909113792s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-527000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-527000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.241961375s)

                                                
                                                
-- stdout --
	* [ha-527000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-527000" primary control-plane node in "ha-527000" cluster
	* Restarting existing qemu2 VM for "ha-527000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-527000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:30:41.934702    8242 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:30:41.934875    8242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:41.934880    8242 out.go:304] Setting ErrFile to fd 2...
	I0419 12:30:41.934883    8242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:41.935029    8242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:30:41.936200    8242 out.go:298] Setting JSON to false
	I0419 12:30:41.956405    8242 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5412,"bootTime":1713549629,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:30:41.956470    8242 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:30:41.961210    8242 out.go:177] * [ha-527000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:30:41.973150    8242 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:30:41.969174    8242 notify.go:220] Checking for updates...
	I0419 12:30:41.981121    8242 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:30:41.988126    8242 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:30:41.991132    8242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:30:41.992501    8242 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:30:41.999136    8242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:30:42.003339    8242 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:30:42.003397    8242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:30:42.007121    8242 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:30:42.013096    8242 start.go:297] selected driver: qemu2
	I0419 12:30:42.013102    8242 start.go:901] validating driver "qemu2" against &{Name:ha-527000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.0 ClusterName:ha-527000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:30:42.013158    8242 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:30:42.015660    8242 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:30:42.015721    8242 cni.go:84] Creating CNI manager for ""
	I0419 12:30:42.015726    8242 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0419 12:30:42.015788    8242 start.go:340] cluster config:
	{Name:ha-527000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-527000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:30:42.020400    8242 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:30:42.028140    8242 out.go:177] * Starting "ha-527000" primary control-plane node in "ha-527000" cluster
	I0419 12:30:42.032156    8242 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:30:42.032172    8242 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:30:42.032182    8242 cache.go:56] Caching tarball of preloaded images
	I0419 12:30:42.032239    8242 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:30:42.032245    8242 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:30:42.032298    8242 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/ha-527000/config.json ...
	I0419 12:30:42.032771    8242 start.go:360] acquireMachinesLock for ha-527000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:30:42.032809    8242 start.go:364] duration metric: took 30.667µs to acquireMachinesLock for "ha-527000"
	I0419 12:30:42.032819    8242 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:30:42.032823    8242 fix.go:54] fixHost starting: 
	I0419 12:30:42.032945    8242 fix.go:112] recreateIfNeeded on ha-527000: state=Stopped err=<nil>
	W0419 12:30:42.032955    8242 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:30:42.041075    8242 out.go:177] * Restarting existing qemu2 VM for "ha-527000" ...
	I0419 12:30:42.045185    8242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:72:6d:54:ee:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2
	I0419 12:30:42.047422    8242 main.go:141] libmachine: STDOUT: 
	I0419 12:30:42.047449    8242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:30:42.047480    8242 fix.go:56] duration metric: took 14.65525ms for fixHost
	I0419 12:30:42.047485    8242 start.go:83] releasing machines lock for "ha-527000", held for 14.671333ms
	W0419 12:30:42.047492    8242 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:30:42.047531    8242 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:30:42.047536    8242 start.go:728] Will try again in 5 seconds ...
	I0419 12:30:47.049658    8242 start.go:360] acquireMachinesLock for ha-527000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:30:47.050174    8242 start.go:364] duration metric: took 401.041µs to acquireMachinesLock for "ha-527000"
	I0419 12:30:47.050341    8242 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:30:47.050362    8242 fix.go:54] fixHost starting: 
	I0419 12:30:47.051091    8242 fix.go:112] recreateIfNeeded on ha-527000: state=Stopped err=<nil>
	W0419 12:30:47.051117    8242 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:30:47.055646    8242 out.go:177] * Restarting existing qemu2 VM for "ha-527000" ...
	I0419 12:30:47.061818    8242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:72:6d:54:ee:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2
	I0419 12:30:47.071875    8242 main.go:141] libmachine: STDOUT: 
	I0419 12:30:47.071936    8242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:30:47.072015    8242 fix.go:56] duration metric: took 21.657375ms for fixHost
	I0419 12:30:47.072032    8242 start.go:83] releasing machines lock for "ha-527000", held for 21.836ms
	W0419 12:30:47.072195    8242 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-527000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-527000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:30:47.078522    8242 out.go:177] 
	W0419 12:30:47.082632    8242 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:30:47.082660    8242 out.go:239] * 
	* 
	W0419 12:30:47.085240    8242 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:30:47.095663    8242 out.go:177] 
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-527000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-527000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (34.755541ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.29s)
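Every restart attempt above fails at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the qemu-system-aarch64 command it wraps is never launched, and minikube gives up after one 5-second retry. The following is a minimal, hypothetical Go sketch (not part of the test suite; the socket path is taken from the logs above) that reproduces just that step by dialing the socket to see whether the socket_vmnet daemon is listening:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path used by the qemu2 driver in the logs above.
	const sockPath = "/var/run/socket_vmnet"

	// socket_vmnet_client performs an equivalent connect before starting qemu;
	// "connection refused" here matches the driver failure seen in this report.
	conn, err := net.DialTimeout("unix", sockPath, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails, the socket_vmnet daemon on the CI host is down or listening on a different path, and every qemu2-driver test below fails with the same "Connection refused" until it is restored.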
TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.569125ms)
-- stdout --
	* The control-plane node ha-527000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-527000"
-- /stdout --
** stderr ** 
	I0419 12:30:47.245618    8256 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:30:47.246043    8256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:47.246050    8256 out.go:304] Setting ErrFile to fd 2...
	I0419 12:30:47.246052    8256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:47.246201    8256 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:30:47.246445    8256 mustload.go:65] Loading cluster: ha-527000
	I0419 12:30:47.246652    8256 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:30:47.250268    8256 out.go:177] * The control-plane node ha-527000 host is not running: state=Stopped
	I0419 12:30:47.253000    8256 out.go:177]   To start a cluster, run: "minikube start -p ha-527000"
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-527000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr: exit status 7 (32.380167ms)
-- stdout --
	ha-527000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0419 12:30:47.286885    8258 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:30:47.287291    8258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:47.287296    8258 out.go:304] Setting ErrFile to fd 2...
	I0419 12:30:47.287299    8258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:47.287493    8258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:30:47.287637    8258 out.go:298] Setting JSON to false
	I0419 12:30:47.287649    8258 mustload.go:65] Loading cluster: ha-527000
	I0419 12:30:47.287957    8258 notify.go:220] Checking for updates...
	I0419 12:30:47.288133    8258 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:30:47.288144    8258 status.go:255] checking status of ha-527000 ...
	I0419 12:30:47.288354    8258 status.go:330] ha-527000 host status = "Stopped" (err=<nil>)
	I0419 12:30:47.288358    8258 status.go:343] host is not running, skipping remaining checks
	I0419 12:30:47.288361    8258 status.go:257] ha-527000 status: &{Name:ha-527000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (32.338875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-527000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-527000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-527000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-527000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (31.956459ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
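The assertion at ha_test.go:413 reads the Status field of the profile from minikube profile list --output json (quoted above) and expects "Degraded"; because the VM never started, the profile reports "Stopped" instead. A minimal sketch of that kind of check, assuming only the fields visible in the quoted JSON (a top-level "valid" array of profiles carrying "Name" and "Status"):

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// Only the fields this check needs; the full config is shown in the log above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-527000" {
			fmt.Printf("%s: %s\n", p.Name, p.Status) // "Stopped" in this run; the test wants "Degraded"
		}
	}
}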
TestMultiControlPlane/serial/StopCluster (3.24s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-527000 stop -v=7 --alsologtostderr: (3.128312541s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr: exit status 7 (72.525291ms)
-- stdout --
	ha-527000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0419 12:30:50.625666    8287 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:30:50.625871    8287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:50.625876    8287 out.go:304] Setting ErrFile to fd 2...
	I0419 12:30:50.625879    8287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:50.626046    8287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:30:50.626203    8287 out.go:298] Setting JSON to false
	I0419 12:30:50.626217    8287 mustload.go:65] Loading cluster: ha-527000
	I0419 12:30:50.626258    8287 notify.go:220] Checking for updates...
	I0419 12:30:50.626476    8287 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:30:50.626484    8287 status.go:255] checking status of ha-527000 ...
	I0419 12:30:50.626750    8287 status.go:330] ha-527000 host status = "Stopped" (err=<nil>)
	I0419 12:30:50.626755    8287 status.go:343] host is not running, skipping remaining checks
	I0419 12:30:50.626761    8287 status.go:257] ha-527000 status: &{Name:ha-527000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr": ha-527000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr": ha-527000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-527000 status -v=7 --alsologtostderr": ha-527000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (34.364584ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.24s)
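All three assertions above inspect the same plain-text status output, which lists only the single stopped control-plane node instead of the expected two control planes, three kubelets, and two apiservers. A rough sketch of that style of check (an assumption about the counting logic, not the actual ha_test.go code), counting marker lines in the quoted output:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The status output quoted in the failure above: one stopped node.
	status := "ha-527000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"

	fmt.Println(strings.Count(status, "type: Control Plane")) // 1, test expects 2
	fmt.Println(strings.Count(status, "kubelet: Stopped"))    // 1, test expects 3
	fmt.Println(strings.Count(status, "apiserver: Stopped"))  // 1, test expects 2
}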
TestMultiControlPlane/serial/RestartCluster (5.26s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-527000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-527000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.187871667s)
-- stdout --
	* [ha-527000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-527000" primary control-plane node in "ha-527000" cluster
	* Restarting existing qemu2 VM for "ha-527000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-527000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0419 12:30:50.692711    8291 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:30:50.692834    8291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:50.692838    8291 out.go:304] Setting ErrFile to fd 2...
	I0419 12:30:50.692840    8291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:50.692965    8291 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:30:50.693915    8291 out.go:298] Setting JSON to false
	I0419 12:30:50.709984    8291 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5421,"bootTime":1713549629,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:30:50.710048    8291 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:30:50.714981    8291 out.go:177] * [ha-527000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:30:50.720907    8291 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:30:50.720957    8291 notify.go:220] Checking for updates...
	I0419 12:30:50.724891    8291 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:30:50.727860    8291 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:30:50.730926    8291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:30:50.733866    8291 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:30:50.736862    8291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:30:50.740079    8291 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:30:50.740324    8291 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:30:50.744885    8291 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:30:50.751876    8291 start.go:297] selected driver: qemu2
	I0419 12:30:50.751885    8291 start.go:901] validating driver "qemu2" against &{Name:ha-527000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.0 ClusterName:ha-527000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:30:50.751937    8291 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:30:50.754159    8291 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:30:50.754199    8291 cni.go:84] Creating CNI manager for ""
	I0419 12:30:50.754207    8291 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0419 12:30:50.754256    8291 start.go:340] cluster config:
	{Name:ha-527000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-527000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:30:50.758407    8291 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:30:50.765840    8291 out.go:177] * Starting "ha-527000" primary control-plane node in "ha-527000" cluster
	I0419 12:30:50.769882    8291 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:30:50.769898    8291 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:30:50.769910    8291 cache.go:56] Caching tarball of preloaded images
	I0419 12:30:50.769972    8291 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:30:50.769979    8291 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:30:50.770032    8291 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/ha-527000/config.json ...
	I0419 12:30:50.770471    8291 start.go:360] acquireMachinesLock for ha-527000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:30:50.770498    8291 start.go:364] duration metric: took 21.917µs to acquireMachinesLock for "ha-527000"
	I0419 12:30:50.770508    8291 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:30:50.770514    8291 fix.go:54] fixHost starting: 
	I0419 12:30:50.770622    8291 fix.go:112] recreateIfNeeded on ha-527000: state=Stopped err=<nil>
	W0419 12:30:50.770630    8291 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:30:50.778874    8291 out.go:177] * Restarting existing qemu2 VM for "ha-527000" ...
	I0419 12:30:50.782906    8291 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:72:6d:54:ee:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2
	I0419 12:30:50.784898    8291 main.go:141] libmachine: STDOUT: 
	I0419 12:30:50.784925    8291 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:30:50.784957    8291 fix.go:56] duration metric: took 14.443458ms for fixHost
	I0419 12:30:50.784962    8291 start.go:83] releasing machines lock for "ha-527000", held for 14.459625ms
	W0419 12:30:50.784968    8291 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:30:50.784998    8291 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:30:50.785003    8291 start.go:728] Will try again in 5 seconds ...
	I0419 12:30:55.787063    8291 start.go:360] acquireMachinesLock for ha-527000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:30:55.787409    8291 start.go:364] duration metric: took 277.792µs to acquireMachinesLock for "ha-527000"
	I0419 12:30:55.787524    8291 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:30:55.787541    8291 fix.go:54] fixHost starting: 
	I0419 12:30:55.788217    8291 fix.go:112] recreateIfNeeded on ha-527000: state=Stopped err=<nil>
	W0419 12:30:55.788242    8291 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:30:55.800379    8291 out.go:177] * Restarting existing qemu2 VM for "ha-527000" ...
	I0419 12:30:55.804947    8291 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:72:6d:54:ee:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/ha-527000/disk.qcow2
	I0419 12:30:55.813924    8291 main.go:141] libmachine: STDOUT: 
	I0419 12:30:55.814000    8291 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:30:55.814068    8291 fix.go:56] duration metric: took 26.523666ms for fixHost
	I0419 12:30:55.814088    8291 start.go:83] releasing machines lock for "ha-527000", held for 26.660708ms
	W0419 12:30:55.814272    8291 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-527000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-527000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:30:55.822772    8291 out.go:177] 
	W0419 12:30:55.825807    8291 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:30:55.825823    8291 out.go:239] * 
	* 
	W0419 12:30:55.827989    8291 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:30:55.835793    8291 out.go:177] 
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-527000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (70.226375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-527000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-527000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-527000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-527000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (32.103458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)
TestMultiControlPlane/serial/AddSecondaryNode (0.08s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-527000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-527000 --control-plane -v=7 --alsologtostderr: exit status 83 (44.301666ms)
-- stdout --
	* The control-plane node ha-527000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-527000"
-- /stdout --
** stderr ** 
	I0419 12:30:56.059643    8307 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:30:56.059805    8307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:56.059808    8307 out.go:304] Setting ErrFile to fd 2...
	I0419 12:30:56.059811    8307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:30:56.059955    8307 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:30:56.060209    8307 mustload.go:65] Loading cluster: ha-527000
	I0419 12:30:56.060413    8307 config.go:182] Loaded profile config "ha-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:30:56.064479    8307 out.go:177] * The control-plane node ha-527000 host is not running: state=Stopped
	I0419 12:30:56.068496    8307 out.go:177]   To start a cluster, run: "minikube start -p ha-527000"
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-527000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (32.132ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.1s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-527000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-527000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-527000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-527000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-527000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-527000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-527000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-527000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-527000 -n ha-527000: exit status 7 (32.32ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.10s)
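Besides the "HAppy" status, ha_test.go:304 also counts cluster nodes through the profile's Config.Nodes array, which still holds only the single original node while the test expects 4 after the add. Extending the JSON sketch above with just that field (field names taken from the JSON quoted in the failure; the literal below is a trimmed stand-in for the real output):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Trimmed stand-in for the `profile list` output quoted above.
	raw := `{"valid":[{"Name":"ha-527000","Status":"Stopped","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`

	var pl struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					ControlPlane bool `json:"ControlPlane"`
					Worker       bool `json:"Worker"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}
	if err := json.Unmarshal([]byte(raw), &pl); err != nil {
		panic(err)
	}
	fmt.Println(len(pl.Valid[0].Config.Nodes)) // 1 node; the test expects 4
}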
TestImageBuild/serial/Setup (9.87s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-716000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-716000 --driver=qemu2 : exit status 80 (9.802789s)
-- stdout --
	* [image-716000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-716000" primary control-plane node in "image-716000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-716000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-716000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-716000 -n image-716000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-716000 -n image-716000: exit status 7 (69.859083ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-716000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.87s)
TestJSONOutput/start/Command (9.84s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-112000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-112000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.843532208s)
-- stdout --
	{"specversion":"1.0","id":"29862530-c78b-4954-87fd-675f4fbb4720","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-112000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eadfadea-ae51-45f3-a952-5759406c7d75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18669"}}
	{"specversion":"1.0","id":"1d9284ab-2a40-4e7c-bb0a-6f4c7c77b691","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig"}}
	{"specversion":"1.0","id":"51483220-49e7-4cde-bf2f-2a93a19396a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"7fe3fcbc-0941-48c4-b5b4-0651b468ed1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7e677f92-eefd-4ee1-b445-b4a049d30592","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube"}}
	{"specversion":"1.0","id":"aee3cbc0-678b-492e-8dcf-2da525ed2c6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"68a6b80d-e362-42f9-b8eb-9d23b2e5605d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3fc46c26-dc4f-4b2d-9aa6-102a6469fd04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"2473f622-b6ac-4995-bfb0-cd1253b9f460","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-112000\" primary control-plane node in \"json-output-112000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f8eee77b-95cc-417c-83dc-c001cda54143","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"a3bd3ebb-2e52-439c-b2e6-0adf3545c903","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-112000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1610f94-eeef-49e9-ba12-b61aa27ef866","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"604c1cb9-c4d1-4117-9a00-56de482023fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"6f2d4e3a-bad9-48aa-9718-98d7eae3ab9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-112000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"9a1996f9-f274-4d8e-aecb-60f48c2a0308","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"370f88f7-dfd6-44fe-8b9c-b59c8f2f3493","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-112000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.84s)
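
The three log lines before the FAIL show the failure mechanism: the harness decodes every stdout line as a CloudEvent, and the bare "OUTPUT: " line that socket_vmnet_client injects is not JSON, so decoding aborts at its first byte. A minimal sketch of that per-line decode, using only encoding/json:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// One well-formed CloudEvent line and one stray plain-text line,
		// mirroring the mixed stdout captured above.
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating VM"}}`,
			`OUTPUT: `,
		}
		for _, line := range lines {
			var ev map[string]interface{}
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				// Prints: invalid character 'O' looking for beginning of value
				fmt.Println("converting to cloud events:", err)
				continue
			}
			fmt.Println("parsed event:", ev["type"])
		}
	}

The TestJSONOutput/unpause failure below is the same mechanism with a different first byte: with the host stopped, minikube prints plain "*"-prefixed text even under --output=json, and the decoder rejects the '*'.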

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-112000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-112000 --output=json --user=testUser: exit status 83 (80.538959ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"43d6a187-6e8d-4aba-846a-20ff0e4ae2ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-112000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"48a01837-6572-42d5-91b2-f18e6ca7d54f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-112000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-112000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-112000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-112000 --output=json --user=testUser: exit status 83 (48.378292ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-112000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-112000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-112000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-112000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.23s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-169000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-169000 --driver=qemu2 : exit status 80 (9.792070625s)

                                                
                                                
-- stdout --
	* [first-169000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-169000" primary control-plane node in "first-169000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-169000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-169000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-169000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-04-19 12:31:28.495616 -0700 PDT m=+498.051883584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-171000 -n second-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-171000 -n second-171000: exit status 85 (80.663833ms)

                                                
                                                
-- stdout --
	* Profile "second-171000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-171000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-171000" host is not running, skipping log retrieval (state="* Profile \"second-171000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-171000\"")
helpers_test.go:175: Cleaning up "second-171000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-171000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-04-19 12:31:28.806716 -0700 PDT m=+498.362989959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-169000 -n first-169000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-169000 -n first-169000: exit status 7 (32.085667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-169000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-169000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-169000
--- FAIL: TestMinikubeProfile (10.23s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.88s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-671000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-671000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.804152167s)

                                                
                                                
-- stdout --
	* [mount-start-1-671000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-671000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-671000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-671000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-671000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-671000 -n mount-start-1-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-671000 -n mount-start-1-671000: exit status 7 (70.805334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-671000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.88s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (10s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-926000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-926000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.926147875s)

                                                
                                                
-- stdout --
	* [multinode-926000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-926000" primary control-plane node in "multinode-926000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-926000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0419 12:31:39.161256    8466 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:31:39.161400    8466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:31:39.161402    8466 out.go:304] Setting ErrFile to fd 2...
	I0419 12:31:39.161405    8466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:31:39.161522    8466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:31:39.162616    8466 out.go:298] Setting JSON to false
	I0419 12:31:39.178554    8466 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5470,"bootTime":1713549629,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:31:39.178625    8466 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:31:39.184026    8466 out.go:177] * [multinode-926000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:31:39.191881    8466 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:31:39.191993    8466 notify.go:220] Checking for updates...
	I0419 12:31:39.198834    8466 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:31:39.201872    8466 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:31:39.204825    8466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:31:39.207828    8466 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:31:39.210809    8466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:31:39.213990    8466 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:31:39.217781    8466 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:31:39.224846    8466 start.go:297] selected driver: qemu2
	I0419 12:31:39.224853    8466 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:31:39.224859    8466 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:31:39.227173    8466 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:31:39.229896    8466 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:31:39.232901    8466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:31:39.232927    8466 cni.go:84] Creating CNI manager for ""
	I0419 12:31:39.232933    8466 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0419 12:31:39.232937    8466 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0419 12:31:39.232968    8466 start.go:340] cluster config:
	{Name:multinode-926000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:31:39.237502    8466 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:31:39.244860    8466 out.go:177] * Starting "multinode-926000" primary control-plane node in "multinode-926000" cluster
	I0419 12:31:39.248903    8466 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:31:39.248916    8466 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:31:39.248927    8466 cache.go:56] Caching tarball of preloaded images
	I0419 12:31:39.248983    8466 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:31:39.248989    8466 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:31:39.249181    8466 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/multinode-926000/config.json ...
	I0419 12:31:39.249193    8466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/multinode-926000/config.json: {Name:mk81674915afd595e60fcc6ff821827b5d5a10da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:31:39.249548    8466 start.go:360] acquireMachinesLock for multinode-926000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:31:39.249582    8466 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "multinode-926000"
	I0419 12:31:39.249593    8466 start.go:93] Provisioning new machine with config: &{Name:multinode-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:31:39.249630    8466 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:31:39.257834    8466 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:31:39.275298    8466 start.go:159] libmachine.API.Create for "multinode-926000" (driver="qemu2")
	I0419 12:31:39.275329    8466 client.go:168] LocalClient.Create starting
	I0419 12:31:39.275398    8466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:31:39.275425    8466 main.go:141] libmachine: Decoding PEM data...
	I0419 12:31:39.275434    8466 main.go:141] libmachine: Parsing certificate...
	I0419 12:31:39.275471    8466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:31:39.275493    8466 main.go:141] libmachine: Decoding PEM data...
	I0419 12:31:39.275501    8466 main.go:141] libmachine: Parsing certificate...
	I0419 12:31:39.275967    8466 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:31:39.396383    8466 main.go:141] libmachine: Creating SSH key...
	I0419 12:31:39.665033    8466 main.go:141] libmachine: Creating Disk image...
	I0419 12:31:39.665042    8466 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:31:39.665280    8466 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2
	I0419 12:31:39.678696    8466 main.go:141] libmachine: STDOUT: 
	I0419 12:31:39.678722    8466 main.go:141] libmachine: STDERR: 
	I0419 12:31:39.678788    8466 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2 +20000M
	I0419 12:31:39.689873    8466 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:31:39.689889    8466 main.go:141] libmachine: STDERR: 
	I0419 12:31:39.689909    8466 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2
	I0419 12:31:39.689915    8466 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:31:39.689945    8466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:bb:f6:b9:49:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2
	I0419 12:31:39.691757    8466 main.go:141] libmachine: STDOUT: 
	I0419 12:31:39.691772    8466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:31:39.691794    8466 client.go:171] duration metric: took 416.468792ms to LocalClient.Create
	I0419 12:31:41.693934    8466 start.go:128] duration metric: took 2.444337333s to createHost
	I0419 12:31:41.694001    8466 start.go:83] releasing machines lock for "multinode-926000", held for 2.444464125s
	W0419 12:31:41.694052    8466 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:31:41.707287    8466 out.go:177] * Deleting "multinode-926000" in qemu2 ...
	W0419 12:31:41.730060    8466 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:31:41.730086    8466 start.go:728] Will try again in 5 seconds ...
	I0419 12:31:46.732181    8466 start.go:360] acquireMachinesLock for multinode-926000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:31:46.732592    8466 start.go:364] duration metric: took 341µs to acquireMachinesLock for "multinode-926000"
	I0419 12:31:46.732716    8466 start.go:93] Provisioning new machine with config: &{Name:multinode-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:31:46.733073    8466 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:31:46.739494    8466 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:31:46.789022    8466 start.go:159] libmachine.API.Create for "multinode-926000" (driver="qemu2")
	I0419 12:31:46.789065    8466 client.go:168] LocalClient.Create starting
	I0419 12:31:46.789174    8466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:31:46.789255    8466 main.go:141] libmachine: Decoding PEM data...
	I0419 12:31:46.789270    8466 main.go:141] libmachine: Parsing certificate...
	I0419 12:31:46.789344    8466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:31:46.789388    8466 main.go:141] libmachine: Decoding PEM data...
	I0419 12:31:46.789404    8466 main.go:141] libmachine: Parsing certificate...
	I0419 12:31:46.790010    8466 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:31:46.923656    8466 main.go:141] libmachine: Creating SSH key...
	I0419 12:31:46.987333    8466 main.go:141] libmachine: Creating Disk image...
	I0419 12:31:46.987339    8466 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:31:46.987509    8466 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2
	I0419 12:31:47.000093    8466 main.go:141] libmachine: STDOUT: 
	I0419 12:31:47.000115    8466 main.go:141] libmachine: STDERR: 
	I0419 12:31:47.000173    8466 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2 +20000M
	I0419 12:31:47.010926    8466 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:31:47.010945    8466 main.go:141] libmachine: STDERR: 
	I0419 12:31:47.010965    8466 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2
	I0419 12:31:47.010971    8466 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:31:47.011002    8466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:01:35:69:1d:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2
	I0419 12:31:47.012661    8466 main.go:141] libmachine: STDOUT: 
	I0419 12:31:47.012682    8466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:31:47.012695    8466 client.go:171] duration metric: took 223.628333ms to LocalClient.Create
	I0419 12:31:49.014881    8466 start.go:128] duration metric: took 2.281765375s to createHost
	I0419 12:31:49.014932    8466 start.go:83] releasing machines lock for "multinode-926000", held for 2.282364541s
	W0419 12:31:49.015291    8466 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-926000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-926000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:31:49.023013    8466 out.go:177] 
	W0419 12:31:49.028015    8466 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:31:49.028046    8466 out.go:239] * 
	* 
	W0419 12:31:49.030698    8466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:31:49.040901    8466 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-926000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000: exit status 7 (69.077791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-926000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.00s)
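
The stderr trace above pinpoints where every qemu2 start in this report dies: libmachine builds the disk image successfully (qemu-img convert, then resize), then launches qemu-system-aarch64 through socket_vmnet_client, which must first connect to the /var/run/socket_vmnet unix socket, and that connect is refused because no socket_vmnet daemon is listening. A minimal sketch of just that connect step, assuming nothing beyond the socket path shown in the log:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// socket_vmnet_client's first action is a unix-domain connect to
		// the daemon socket; with no daemon listening, the connect fails
		// with ECONNREFUSED, which the driver surfaces in the log as:
		//   Failed to connect to "/var/run/socket_vmnet": Connection refused
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("ERROR:", err) // e.g. connect: connection refused
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is reachable")
	}

Running a probe like this on the Jenkins host before the suite would distinguish a missing or dead daemon (connection refused) from a missing socket file or a permissions problem.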

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (93.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (61.398333ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-926000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- rollout status deployment/busybox: exit status 1 (58.713458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.751ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.924375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.49825ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.027208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.671417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.030791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.183ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.600917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.936125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.355542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.043166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.585666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.64425ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.143ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.724583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000: exit status 7 (32.034459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-926000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (93.95s)
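
Nearly all of this entry's 93.95s is the poll loop visible above: the harness re-runs the same jsonpath query, logging each failure as possibly temporary, until its retry budget is exhausted. A minimal sketch of that pattern; the binary and profile name come from the log, while the budget and interval here are illustrative guesses:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// getPodIPs shells out the same way the harness does above.
	func getPodIPs(profile string) (string, error) {
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		deadline := time.Now().Add(90 * time.Second) // hypothetical retry budget
		for time.Now().Before(deadline) {
			ips, err := getPodIPs("multinode-926000")
			if err == nil && ips != "" {
				fmt.Println("pod IPs:", ips)
				return
			}
			fmt.Println("failed to retrieve Pod IPs (may be temporary):", err)
			time.Sleep(5 * time.Second)
		}
		fmt.Println("failed to resolve pod IPs: retry budget exhausted")
	}

Because FreshStart2Nodes never created a cluster, every iteration hits the same "no server found for cluster" error, so the loop can only time out.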

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-926000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.553208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-926000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000: exit status 7 (32.311625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-926000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-926000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-926000 -v 3 --alsologtostderr: exit status 83 (44.966958ms)

-- stdout --
	* The control-plane node multinode-926000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-926000"

-- /stdout --
** stderr ** 
	I0419 12:33:23.201054    8560 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:33:23.201214    8560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:23.201217    8560 out.go:304] Setting ErrFile to fd 2...
	I0419 12:33:23.201219    8560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:23.201337    8560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:33:23.201554    8560 mustload.go:65] Loading cluster: multinode-926000
	I0419 12:33:23.201727    8560 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:33:23.206643    8560 out.go:177] * The control-plane node multinode-926000 host is not running: state=Stopped
	I0419 12:33:23.210622    8560 out.go:177]   To start a cluster, run: "minikube start -p multinode-926000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-926000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000: exit status 7 (32.5315ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-926000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-926000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-926000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.140583ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-926000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-926000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-926000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000: exit status 7 (32.441833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-926000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
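
The two errors above compound: kubectl exits non-zero because the context is gone, so the captured output is empty, and decoding an empty byte slice with encoding/json reports exactly "unexpected end of JSON input". A minimal sketch reproducing the decode error (variable names are illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// An empty capture is what the failed kubectl call leaves behind.
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}
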

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-926000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-926000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-926000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"multinode-926000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000: exit status 7 (32.127792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-926000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
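
The assertion above wants three entries under Config.Nodes in the profile JSON but finds one; the shape is visible in the dumped blob. A sketch of the same count, with the structs trimmed to the fields the check needs (these are not minikube's actual types):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList mirrors just enough of the `profile list --output json`
	// payload shown above to count nodes per profile.
	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct {
					Name         string
					ControlPlane bool
					Worker       bool
				}
			}
		}
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println("decode:", err)
			return
		}
		for _, p := range pl.Valid {
			fmt.Println(p.Name, "nodes:", len(p.Config.Nodes)) // 1 here, 3 expected
		}
	}
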

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 status --output json --alsologtostderr: exit status 7 (32.131291ms)

-- stdout --
	{"Name":"multinode-926000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0419 12:33:23.442684    8573 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:33:23.442841    8573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:23.442844    8573 out.go:304] Setting ErrFile to fd 2...
	I0419 12:33:23.442847    8573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:23.442973    8573 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:33:23.443095    8573 out.go:298] Setting JSON to true
	I0419 12:33:23.443111    8573 mustload.go:65] Loading cluster: multinode-926000
	I0419 12:33:23.443170    8573 notify.go:220] Checking for updates...
	I0419 12:33:23.443330    8573 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:33:23.443336    8573 status.go:255] checking status of multinode-926000 ...
	I0419 12:33:23.443532    8573 status.go:330] multinode-926000 host status = "Stopped" (err=<nil>)
	I0419 12:33:23.443535    8573 status.go:343] host is not running, skipping remaining checks
	I0419 12:33:23.443537    8573 status.go:257] multinode-926000 status: &{Name:multinode-926000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-926000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000: exit status 7 (32.179417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-926000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
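
The decode failure above is a shape mismatch, not corruption: with a single node, `status --output json` prints one object, while the test unmarshals into a slice of statuses. A self-contained sketch using the object from the stdout block (Status trimmed to the printed fields):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status carries only the fields printed in the stdout block above.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		raw := []byte(`{"Name":"multinode-926000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

		var many []Status
		fmt.Println(json.Unmarshal(raw, &many)) // cannot unmarshal object into Go value of type []main.Status

		var one Status
		fmt.Println(json.Unmarshal(raw, &one)) // <nil>: the same bytes decode fine as one value
	}
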

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 node stop m03: exit status 85 (49.502625ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-926000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 status: exit status 7 (32.051208ms)

-- stdout --
	multinode-926000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 status --alsologtostderr: exit status 7 (31.852416ms)

-- stdout --
	multinode-926000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0419 12:33:23.589128    8581 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:33:23.589278    8581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:23.589282    8581 out.go:304] Setting ErrFile to fd 2...
	I0419 12:33:23.589284    8581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:23.589398    8581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:33:23.589524    8581 out.go:298] Setting JSON to false
	I0419 12:33:23.589535    8581 mustload.go:65] Loading cluster: multinode-926000
	I0419 12:33:23.589595    8581 notify.go:220] Checking for updates...
	I0419 12:33:23.589741    8581 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:33:23.589746    8581 status.go:255] checking status of multinode-926000 ...
	I0419 12:33:23.589943    8581 status.go:330] multinode-926000 host status = "Stopped" (err=<nil>)
	I0419 12:33:23.589947    8581 status.go:343] host is not running, skipping remaining checks
	I0419 12:33:23.589949    8581 status.go:257] multinode-926000 status: &{Name:multinode-926000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-926000 status --alsologtostderr": multinode-926000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000: exit status 7 (31.974666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-926000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

TestMultiNode/serial/StartAfterStop (50.81s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 node start m03 -v=7 --alsologtostderr: exit status 85 (49.537416ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0419 12:33:23.653514    8585 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:33:23.654028    8585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:23.654032    8585 out.go:304] Setting ErrFile to fd 2...
	I0419 12:33:23.654035    8585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:23.654164    8585 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:33:23.654396    8585 mustload.go:65] Loading cluster: multinode-926000
	I0419 12:33:23.654583    8585 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:33:23.658772    8585 out.go:177] 
	W0419 12:33:23.661839    8585 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0419 12:33:23.661845    8585 out.go:239] * 
	* 
	W0419 12:33:23.663787    8585 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:33:23.667748    8585 out.go:177] 

** /stderr **
multinode_test.go:284: I0419 12:33:23.653514    8585 out.go:291] Setting OutFile to fd 1 ...
I0419 12:33:23.654028    8585 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:33:23.654032    8585 out.go:304] Setting ErrFile to fd 2...
I0419 12:33:23.654035    8585 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0419 12:33:23.654164    8585 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
I0419 12:33:23.654396    8585 mustload.go:65] Loading cluster: multinode-926000
I0419 12:33:23.654583    8585 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0419 12:33:23.658772    8585 out.go:177] 
W0419 12:33:23.661839    8585 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0419 12:33:23.661845    8585 out.go:239] * 
* 
W0419 12:33:23.663787    8585 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0419 12:33:23.667748    8585 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-926000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr: exit status 7 (32.142792ms)

-- stdout --
	multinode-926000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0419 12:33:23.703203    8587 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:33:23.703359    8587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:23.703362    8587 out.go:304] Setting ErrFile to fd 2...
	I0419 12:33:23.703364    8587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:23.703499    8587 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:33:23.703617    8587 out.go:298] Setting JSON to false
	I0419 12:33:23.703628    8587 mustload.go:65] Loading cluster: multinode-926000
	I0419 12:33:23.703689    8587 notify.go:220] Checking for updates...
	I0419 12:33:23.703828    8587 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:33:23.703833    8587 status.go:255] checking status of multinode-926000 ...
	I0419 12:33:23.704034    8587 status.go:330] multinode-926000 host status = "Stopped" (err=<nil>)
	I0419 12:33:23.704038    8587 status.go:343] host is not running, skipping remaining checks
	I0419 12:33:23.704040    8587 status.go:257] multinode-926000 status: &{Name:multinode-926000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr: exit status 7 (76.109583ms)

-- stdout --
	multinode-926000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0419 12:33:24.580313    8589 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:33:24.580507    8589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:24.580511    8589 out.go:304] Setting ErrFile to fd 2...
	I0419 12:33:24.580514    8589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:24.580678    8589 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:33:24.580831    8589 out.go:298] Setting JSON to false
	I0419 12:33:24.580845    8589 mustload.go:65] Loading cluster: multinode-926000
	I0419 12:33:24.580886    8589 notify.go:220] Checking for updates...
	I0419 12:33:24.581122    8589 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:33:24.581129    8589 status.go:255] checking status of multinode-926000 ...
	I0419 12:33:24.581395    8589 status.go:330] multinode-926000 host status = "Stopped" (err=<nil>)
	I0419 12:33:24.581399    8589 status.go:343] host is not running, skipping remaining checks
	I0419 12:33:24.581402    8589 status.go:257] multinode-926000 status: &{Name:multinode-926000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr: exit status 7 (77.5875ms)

-- stdout --
	multinode-926000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0419 12:33:25.528130    8591 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:33:25.528311    8591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:25.528315    8591 out.go:304] Setting ErrFile to fd 2...
	I0419 12:33:25.528317    8591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:25.528484    8591 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:33:25.528659    8591 out.go:298] Setting JSON to false
	I0419 12:33:25.528673    8591 mustload.go:65] Loading cluster: multinode-926000
	I0419 12:33:25.528711    8591 notify.go:220] Checking for updates...
	I0419 12:33:25.528933    8591 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:33:25.528940    8591 status.go:255] checking status of multinode-926000 ...
	I0419 12:33:25.529198    8591 status.go:330] multinode-926000 host status = "Stopped" (err=<nil>)
	I0419 12:33:25.529204    8591 status.go:343] host is not running, skipping remaining checks
	I0419 12:33:25.529210    8591 status.go:257] multinode-926000 status: &{Name:multinode-926000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr: exit status 7 (75.933708ms)

-- stdout --
	multinode-926000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0419 12:33:26.849466    8594 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:33:26.849662    8594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:26.849666    8594 out.go:304] Setting ErrFile to fd 2...
	I0419 12:33:26.849668    8594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:26.849808    8594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:33:26.849986    8594 out.go:298] Setting JSON to false
	I0419 12:33:26.849999    8594 mustload.go:65] Loading cluster: multinode-926000
	I0419 12:33:26.850059    8594 notify.go:220] Checking for updates...
	I0419 12:33:26.850218    8594 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:33:26.850224    8594 status.go:255] checking status of multinode-926000 ...
	I0419 12:33:26.850470    8594 status.go:330] multinode-926000 host status = "Stopped" (err=<nil>)
	I0419 12:33:26.850474    8594 status.go:343] host is not running, skipping remaining checks
	I0419 12:33:26.850477    8594 status.go:257] multinode-926000 status: &{Name:multinode-926000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr: exit status 7 (77.633ms)

-- stdout --
	multinode-926000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0419 12:33:31.431464    8597 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:33:31.431694    8597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:31.431699    8597 out.go:304] Setting ErrFile to fd 2...
	I0419 12:33:31.431702    8597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:31.431857    8597 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:33:31.432006    8597 out.go:298] Setting JSON to false
	I0419 12:33:31.432020    8597 mustload.go:65] Loading cluster: multinode-926000
	I0419 12:33:31.432059    8597 notify.go:220] Checking for updates...
	I0419 12:33:31.432288    8597 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:33:31.432294    8597 status.go:255] checking status of multinode-926000 ...
	I0419 12:33:31.432596    8597 status.go:330] multinode-926000 host status = "Stopped" (err=<nil>)
	I0419 12:33:31.432601    8597 status.go:343] host is not running, skipping remaining checks
	I0419 12:33:31.432606    8597 status.go:257] multinode-926000 status: &{Name:multinode-926000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr: exit status 7 (77.251958ms)

-- stdout --
	multinode-926000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0419 12:33:37.430507    8599 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:33:37.430692    8599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:37.430697    8599 out.go:304] Setting ErrFile to fd 2...
	I0419 12:33:37.430700    8599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:37.430861    8599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:33:37.431024    8599 out.go:298] Setting JSON to false
	I0419 12:33:37.431038    8599 mustload.go:65] Loading cluster: multinode-926000
	I0419 12:33:37.431059    8599 notify.go:220] Checking for updates...
	I0419 12:33:37.431349    8599 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:33:37.431356    8599 status.go:255] checking status of multinode-926000 ...
	I0419 12:33:37.431614    8599 status.go:330] multinode-926000 host status = "Stopped" (err=<nil>)
	I0419 12:33:37.431619    8599 status.go:343] host is not running, skipping remaining checks
	I0419 12:33:37.431622    8599 status.go:257] multinode-926000 status: &{Name:multinode-926000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr: exit status 7 (75.913584ms)

-- stdout --
	multinode-926000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0419 12:33:41.892871    8604 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:33:41.893063    8604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:41.893067    8604 out.go:304] Setting ErrFile to fd 2...
	I0419 12:33:41.893070    8604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:41.893245    8604 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:33:41.893403    8604 out.go:298] Setting JSON to false
	I0419 12:33:41.893417    8604 mustload.go:65] Loading cluster: multinode-926000
	I0419 12:33:41.893450    8604 notify.go:220] Checking for updates...
	I0419 12:33:41.893682    8604 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:33:41.893693    8604 status.go:255] checking status of multinode-926000 ...
	I0419 12:33:41.893958    8604 status.go:330] multinode-926000 host status = "Stopped" (err=<nil>)
	I0419 12:33:41.893963    8604 status.go:343] host is not running, skipping remaining checks
	I0419 12:33:41.893966    8604 status.go:257] multinode-926000 status: &{Name:multinode-926000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr: exit status 7 (75.899791ms)

-- stdout --
	multinode-926000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0419 12:33:52.733530    8609 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:33:52.733692    8609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:52.733696    8609 out.go:304] Setting ErrFile to fd 2...
	I0419 12:33:52.733710    8609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:33:52.733881    8609 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:33:52.734049    8609 out.go:298] Setting JSON to false
	I0419 12:33:52.734064    8609 mustload.go:65] Loading cluster: multinode-926000
	I0419 12:33:52.734106    8609 notify.go:220] Checking for updates...
	I0419 12:33:52.734305    8609 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:33:52.734312    8609 status.go:255] checking status of multinode-926000 ...
	I0419 12:33:52.734566    8609 status.go:330] multinode-926000 host status = "Stopped" (err=<nil>)
	I0419 12:33:52.734571    8609 status.go:343] host is not running, skipping remaining checks
	I0419 12:33:52.734574    8609 status.go:257] multinode-926000 status: &{Name:multinode-926000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr: exit status 7 (76.023125ms)

-- stdout --
	multinode-926000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0419 12:34:14.396510    8611 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:34:14.396681    8611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:34:14.396685    8611 out.go:304] Setting ErrFile to fd 2...
	I0419 12:34:14.396688    8611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:34:14.396843    8611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:34:14.396990    8611 out.go:298] Setting JSON to false
	I0419 12:34:14.397005    8611 mustload.go:65] Loading cluster: multinode-926000
	I0419 12:34:14.397046    8611 notify.go:220] Checking for updates...
	I0419 12:34:14.397274    8611 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:34:14.397282    8611 status.go:255] checking status of multinode-926000 ...
	I0419 12:34:14.397561    8611 status.go:330] multinode-926000 host status = "Stopped" (err=<nil>)
	I0419 12:34:14.397566    8611 status.go:343] host is not running, skipping remaining checks
	I0419 12:34:14.397569    8611 status.go:257] multinode-926000 status: &{Name:multinode-926000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-926000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000: exit status 7 (34.325375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-926000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (50.81s)
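
The repeated status runs above (timestamps climbing from 12:33:23 to 12:34:14) are a retry loop waiting for the host to leave the Stopped state; exit status 7 keeps the loop going until it times out. A sketch of that pattern with illustrative backoff values (the test's real intervals may differ):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := time.Second
		for attempt := 1; attempt <= 9; attempt++ {
			cmd := exec.Command("out/minikube-darwin-arm64",
				"-p", "multinode-926000", "status", "-v=7", "--alsologtostderr")
			if err := cmd.Run(); err == nil {
				fmt.Println("host is running")
				return
			}
			fmt.Printf("attempt %d: host still stopped; retrying in %v\n", attempt, delay)
			time.Sleep(delay)
			delay *= 2 // widening gaps, roughly what the log timestamps show
		}
		fmt.Println("timed out waiting for the host")
	}
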

TestMultiNode/serial/RestartKeepsNodes (9.1s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-926000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-926000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-926000: (3.741022083s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-926000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-926000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.218417541s)

-- stdout --
	* [multinode-926000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-926000" primary control-plane node in "multinode-926000" cluster
	* Restarting existing qemu2 VM for "multinode-926000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-926000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:34:18.268977    8635 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:34:18.269126    8635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:34:18.269130    8635 out.go:304] Setting ErrFile to fd 2...
	I0419 12:34:18.269133    8635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:34:18.269299    8635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:34:18.270378    8635 out.go:298] Setting JSON to false
	I0419 12:34:18.289125    8635 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5629,"bootTime":1713549629,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:34:18.289200    8635 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:34:18.294425    8635 out.go:177] * [multinode-926000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:34:18.300383    8635 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:34:18.304282    8635 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:34:18.300428    8635 notify.go:220] Checking for updates...
	I0419 12:34:18.310376    8635 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:34:18.313300    8635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:34:18.316384    8635 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:34:18.319341    8635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:34:18.322611    8635 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:34:18.322681    8635 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:34:18.327351    8635 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:34:18.334327    8635 start.go:297] selected driver: qemu2
	I0419 12:34:18.334335    8635 start.go:901] validating driver "qemu2" against &{Name:multinode-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.0 ClusterName:multinode-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:34:18.334399    8635 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:34:18.336856    8635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:34:18.336910    8635 cni.go:84] Creating CNI manager for ""
	I0419 12:34:18.336916    8635 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0419 12:34:18.336959    8635 start.go:340] cluster config:
	{Name:multinode-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-926000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:34:18.341338    8635 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:34:18.347273    8635 out.go:177] * Starting "multinode-926000" primary control-plane node in "multinode-926000" cluster
	I0419 12:34:18.351367    8635 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:34:18.351383    8635 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:34:18.351395    8635 cache.go:56] Caching tarball of preloaded images
	I0419 12:34:18.351466    8635 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:34:18.351472    8635 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:34:18.351566    8635 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/multinode-926000/config.json ...
	I0419 12:34:18.352042    8635 start.go:360] acquireMachinesLock for multinode-926000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:34:18.352076    8635 start.go:364] duration metric: took 28.208µs to acquireMachinesLock for "multinode-926000"
	I0419 12:34:18.352088    8635 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:34:18.352094    8635 fix.go:54] fixHost starting: 
	I0419 12:34:18.352209    8635 fix.go:112] recreateIfNeeded on multinode-926000: state=Stopped err=<nil>
	W0419 12:34:18.352220    8635 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:34:18.360349    8635 out.go:177] * Restarting existing qemu2 VM for "multinode-926000" ...
	I0419 12:34:18.364190    8635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:01:35:69:1d:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2
	I0419 12:34:18.366363    8635 main.go:141] libmachine: STDOUT: 
	I0419 12:34:18.366385    8635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:34:18.366412    8635 fix.go:56] duration metric: took 14.31725ms for fixHost
	I0419 12:34:18.366417    8635 start.go:83] releasing machines lock for "multinode-926000", held for 14.335625ms
	W0419 12:34:18.366424    8635 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:34:18.366464    8635 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:34:18.366469    8635 start.go:728] Will try again in 5 seconds ...
	I0419 12:34:23.368570    8635 start.go:360] acquireMachinesLock for multinode-926000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:34:23.368994    8635 start.go:364] duration metric: took 315.167µs to acquireMachinesLock for "multinode-926000"
	I0419 12:34:23.369166    8635 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:34:23.369187    8635 fix.go:54] fixHost starting: 
	I0419 12:34:23.369928    8635 fix.go:112] recreateIfNeeded on multinode-926000: state=Stopped err=<nil>
	W0419 12:34:23.369959    8635 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:34:23.377569    8635 out.go:177] * Restarting existing qemu2 VM for "multinode-926000" ...
	I0419 12:34:23.381706    8635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:01:35:69:1d:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2
	I0419 12:34:23.390392    8635 main.go:141] libmachine: STDOUT: 
	I0419 12:34:23.390456    8635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:34:23.390523    8635 fix.go:56] duration metric: took 21.33425ms for fixHost
	I0419 12:34:23.390543    8635 start.go:83] releasing machines lock for "multinode-926000", held for 21.502084ms
	W0419 12:34:23.390739    8635 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-926000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-926000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:34:23.397590    8635 out.go:177] 
	W0419 12:34:23.401652    8635 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:34:23.401688    8635 out.go:239] * 
	* 
	W0419 12:34:23.404386    8635 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:34:23.411535    8635 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-926000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-926000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000: exit status 7 (35.139042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-926000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.10s)
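Every restart in this block dies at the same step: the qemu2 driver dials /var/run/socket_vmnet and gets "Connection refused", i.e. no socket_vmnet daemon is listening on the agent. A quick check-and-recover sketch for the build host, assuming socket_vmnet was installed through Homebrew as in minikube's qemu2 driver docs (paths and the service name may differ for a source install):

	# Is the socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Restart the daemon; it has to run as root because vmnet.framework
	# requires elevated privileges.
	HOMEBREW=$(which brew)
	sudo "${HOMEBREW}" services restart socket_vmnet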

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 node delete m03: exit status 83 (41.163417ms)

-- stdout --
	* The control-plane node multinode-926000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-926000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-926000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 status --alsologtostderr: exit status 7 (32.402166ms)

-- stdout --
	multinode-926000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0419 12:34:23.604790    8649 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:34:23.604945    8649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:34:23.604950    8649 out.go:304] Setting ErrFile to fd 2...
	I0419 12:34:23.604953    8649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:34:23.605079    8649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:34:23.605197    8649 out.go:298] Setting JSON to false
	I0419 12:34:23.605208    8649 mustload.go:65] Loading cluster: multinode-926000
	I0419 12:34:23.605268    8649 notify.go:220] Checking for updates...
	I0419 12:34:23.605402    8649 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:34:23.605408    8649 status.go:255] checking status of multinode-926000 ...
	I0419 12:34:23.605595    8649 status.go:330] multinode-926000 host status = "Stopped" (err=<nil>)
	I0419 12:34:23.605600    8649 status.go:343] host is not running, skipping remaining checks
	I0419 12:34:23.605602    8649 status.go:257] multinode-926000 status: &{Name:multinode-926000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-926000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000: exit status 7 (32.117583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-926000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

TestMultiNode/serial/StopMultiNode (3.25s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-926000 stop: (3.11057625s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 status: exit status 7 (68.675167ms)

-- stdout --
	multinode-926000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-926000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-926000 status --alsologtostderr: exit status 7 (34.303042ms)

-- stdout --
	multinode-926000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0419 12:34:26.851039    8673 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:34:26.851190    8673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:34:26.851193    8673 out.go:304] Setting ErrFile to fd 2...
	I0419 12:34:26.851195    8673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:34:26.851320    8673 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:34:26.851449    8673 out.go:298] Setting JSON to false
	I0419 12:34:26.851463    8673 mustload.go:65] Loading cluster: multinode-926000
	I0419 12:34:26.851516    8673 notify.go:220] Checking for updates...
	I0419 12:34:26.851664    8673 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:34:26.851670    8673 status.go:255] checking status of multinode-926000 ...
	I0419 12:34:26.851866    8673 status.go:330] multinode-926000 host status = "Stopped" (err=<nil>)
	I0419 12:34:26.851870    8673 status.go:343] host is not running, skipping remaining checks
	I0419 12:34:26.851872    8673 status.go:257] multinode-926000 status: &{Name:multinode-926000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-926000 status --alsologtostderr": multinode-926000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-926000 status --alsologtostderr": multinode-926000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000: exit status 7 (32.376166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-926000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.25s)
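The stop itself succeeds in 3.11s; the assertions at multinode_test.go:364 and :368 fail because status reports only the control-plane node. The worker node was never added earlier in the run, so there is a single "host: Stopped" / "kubelet: Stopped" pair where the test presumably expects one per node. The count the assertions see can be reproduced directly (a sketch, not the test's exact logic):

	out/minikube-darwin-arm64 -p multinode-926000 status --alsologtostderr | grep -c 'host: Stopped'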

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-926000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-926000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.185389791s)

-- stdout --
	* [multinode-926000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-926000" primary control-plane node in "multinode-926000" cluster
	* Restarting existing qemu2 VM for "multinode-926000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-926000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:34:26.914734    8677 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:34:26.915126    8677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:34:26.915131    8677 out.go:304] Setting ErrFile to fd 2...
	I0419 12:34:26.915134    8677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:34:26.915324    8677 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:34:26.916684    8677 out.go:298] Setting JSON to false
	I0419 12:34:26.933046    8677 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5637,"bootTime":1713549629,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:34:26.933112    8677 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:34:26.937374    8677 out.go:177] * [multinode-926000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:34:26.940406    8677 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:34:26.944378    8677 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:34:26.940470    8677 notify.go:220] Checking for updates...
	I0419 12:34:26.951347    8677 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:34:26.954393    8677 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:34:26.957374    8677 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:34:26.960339    8677 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:34:26.963673    8677 config.go:182] Loaded profile config "multinode-926000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:34:26.963977    8677 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:34:26.968342    8677 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:34:26.975354    8677 start.go:297] selected driver: qemu2
	I0419 12:34:26.975360    8677 start.go:901] validating driver "qemu2" against &{Name:multinode-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:34:26.975410    8677 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:34:26.977643    8677 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:34:26.977689    8677 cni.go:84] Creating CNI manager for ""
	I0419 12:34:26.977695    8677 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0419 12:34:26.977755    8677 start.go:340] cluster config:
	{Name:multinode-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:34:26.981983    8677 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:34:26.989378    8677 out.go:177] * Starting "multinode-926000" primary control-plane node in "multinode-926000" cluster
	I0419 12:34:26.993236    8677 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:34:26.993252    8677 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:34:26.993261    8677 cache.go:56] Caching tarball of preloaded images
	I0419 12:34:26.993317    8677 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:34:26.993323    8677 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:34:26.993385    8677 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/multinode-926000/config.json ...
	I0419 12:34:26.993859    8677 start.go:360] acquireMachinesLock for multinode-926000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:34:26.993886    8677 start.go:364] duration metric: took 21.5µs to acquireMachinesLock for "multinode-926000"
	I0419 12:34:26.993896    8677 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:34:26.993915    8677 fix.go:54] fixHost starting: 
	I0419 12:34:26.994030    8677 fix.go:112] recreateIfNeeded on multinode-926000: state=Stopped err=<nil>
	W0419 12:34:26.994037    8677 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:34:27.002325    8677 out.go:177] * Restarting existing qemu2 VM for "multinode-926000" ...
	I0419 12:34:27.006329    8677 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:01:35:69:1d:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2
	I0419 12:34:27.008389    8677 main.go:141] libmachine: STDOUT: 
	I0419 12:34:27.008409    8677 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:34:27.008436    8677 fix.go:56] duration metric: took 14.520708ms for fixHost
	I0419 12:34:27.008441    8677 start.go:83] releasing machines lock for "multinode-926000", held for 14.550416ms
	W0419 12:34:27.008446    8677 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:34:27.008486    8677 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:34:27.008490    8677 start.go:728] Will try again in 5 seconds ...
	I0419 12:34:32.010590    8677 start.go:360] acquireMachinesLock for multinode-926000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:34:32.011004    8677 start.go:364] duration metric: took 327.709µs to acquireMachinesLock for "multinode-926000"
	I0419 12:34:32.011132    8677 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:34:32.011152    8677 fix.go:54] fixHost starting: 
	I0419 12:34:32.011892    8677 fix.go:112] recreateIfNeeded on multinode-926000: state=Stopped err=<nil>
	W0419 12:34:32.011919    8677 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:34:32.021366    8677 out.go:177] * Restarting existing qemu2 VM for "multinode-926000" ...
	I0419 12:34:32.025571    8677 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:01:35:69:1d:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2
	I0419 12:34:32.034787    8677 main.go:141] libmachine: STDOUT: 
	I0419 12:34:32.034844    8677 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:34:32.034924    8677 fix.go:56] duration metric: took 23.774708ms for fixHost
	I0419 12:34:32.034942    8677 start.go:83] releasing machines lock for "multinode-926000", held for 23.917791ms
	W0419 12:34:32.035125    8677 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-926000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-926000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:34:32.042374    8677 out.go:177] 
	W0419 12:34:32.046471    8677 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:34:32.046494    8677 out.go:239] * 
	* 
	W0419 12:34:32.049423    8677 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:34:32.056434    8677 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-926000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000: exit status 7 (70.625583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-926000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
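To separate a QEMU/HVF problem from a socket_vmnet problem, the invocation logged above can be rerun by hand with user-mode networking substituted for the socket_vmnet file descriptor (QMP, pidfile, and daemonize plumbing dropped). If the guest boots, the hypervisor side is healthy and only the vmnet socket needs attention; user-mode NAT will not give the VM the routable address minikube needs, so this is a diagnostic sketch only:

	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none -m 2200 -smp 2 -boot d \
	  -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/boot2docker.iso \
	  -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
	  /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/multinode-926000/disk.qcow2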

TestMultiNode/serial/ValidateNameConflict (20.17s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-926000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-926000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-926000-m01 --driver=qemu2 : exit status 80 (9.802151125s)

-- stdout --
	* [multinode-926000-m01] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-926000-m01" primary control-plane node in "multinode-926000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-926000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-926000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-926000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-926000-m02 --driver=qemu2 : exit status 80 (10.111616292s)

-- stdout --
	* [multinode-926000-m02] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-926000-m02" primary control-plane node in "multinode-926000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-926000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-926000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-926000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-926000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-926000: exit status 83 (81.545959ms)

-- stdout --
	* The control-plane node multinode-926000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-926000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-926000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-926000 -n multinode-926000: exit status 7 (32.214833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-926000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.17s)

TestPreload (10.1s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-428000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-428000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.924115375s)

-- stdout --
	* [test-preload-428000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-428000" primary control-plane node in "test-preload-428000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-428000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:34:52.475087    8736 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:34:52.475264    8736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:34:52.475267    8736 out.go:304] Setting ErrFile to fd 2...
	I0419 12:34:52.475270    8736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:34:52.475393    8736 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:34:52.476397    8736 out.go:298] Setting JSON to false
	I0419 12:34:52.492533    8736 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5663,"bootTime":1713549629,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:34:52.492595    8736 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:34:52.497306    8736 out.go:177] * [test-preload-428000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:34:52.504325    8736 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:34:52.508257    8736 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:34:52.504382    8736 notify.go:220] Checking for updates...
	I0419 12:34:52.511245    8736 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:34:52.514272    8736 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:34:52.517194    8736 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:34:52.520279    8736 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:34:52.523608    8736 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:34:52.523657    8736 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:34:52.527169    8736 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:34:52.534240    8736 start.go:297] selected driver: qemu2
	I0419 12:34:52.534246    8736 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:34:52.534252    8736 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:34:52.536538    8736 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:34:52.537699    8736 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:34:52.540349    8736 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:34:52.540393    8736 cni.go:84] Creating CNI manager for ""
	I0419 12:34:52.540399    8736 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:34:52.540403    8736 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 12:34:52.540432    8736 start.go:340] cluster config:
	{Name:test-preload-428000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-428000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:34:52.544996    8736 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:34:52.552260    8736 out.go:177] * Starting "test-preload-428000" primary control-plane node in "test-preload-428000" cluster
	I0419 12:34:52.556260    8736 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0419 12:34:52.556331    8736 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/test-preload-428000/config.json ...
	I0419 12:34:52.556343    8736 cache.go:107] acquiring lock: {Name:mke0d297b5bc4c0575347e0b88640504e7dc748f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:34:52.556358    8736 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/test-preload-428000/config.json: {Name:mk331e26d8fefa68ed84aa4561c6d8409d43b44d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:34:52.556368    8736 cache.go:107] acquiring lock: {Name:mkb6af8626a954a235b538f440795293e3404958 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:34:52.556379    8736 cache.go:107] acquiring lock: {Name:mkacc2782c5b5d03107b449c75980ebcf9fd4811 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:34:52.556387    8736 cache.go:107] acquiring lock: {Name:mk793787609c84c947707270dee020bc616924d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:34:52.556461    8736 cache.go:107] acquiring lock: {Name:mk222e82763d330dabf19d097083370e6ac740bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:34:52.556521    8736 cache.go:107] acquiring lock: {Name:mkb28d6ee1579b07c530bf1406e469f0246e8a9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:34:52.556525    8736 cache.go:107] acquiring lock: {Name:mk8e8a9f6de3719c663428da6ed7c725eefdf73a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:34:52.556688    8736 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0419 12:34:52.556693    8736 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0419 12:34:52.556717    8736 cache.go:107] acquiring lock: {Name:mk7c614b3e4410c84134cd9581b24e4ac38f5ca7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:34:52.556768    8736 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0419 12:34:52.556781    8736 start.go:360] acquireMachinesLock for test-preload-428000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:34:52.556899    8736 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0419 12:34:52.556893    8736 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0419 12:34:52.556904    8736 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:34:52.556798    8736 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0419 12:34:52.556974    8736 start.go:364] duration metric: took 84.042µs to acquireMachinesLock for "test-preload-428000"
	I0419 12:34:52.557009    8736 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:34:52.557022    8736 start.go:93] Provisioning new machine with config: &{Name:test-preload-428000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-428000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:34:52.557063    8736 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:34:52.564280    8736 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:34:52.568194    8736 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0419 12:34:52.569253    8736 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:34:52.569527    8736 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0419 12:34:52.574106    8736 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:34:52.574108    8736 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0419 12:34:52.574144    8736 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0419 12:34:52.574205    8736 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0419 12:34:52.574255    8736 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0419 12:34:52.581722    8736 start.go:159] libmachine.API.Create for "test-preload-428000" (driver="qemu2")
	I0419 12:34:52.581739    8736 client.go:168] LocalClient.Create starting
	I0419 12:34:52.581804    8736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:34:52.581832    8736 main.go:141] libmachine: Decoding PEM data...
	I0419 12:34:52.581841    8736 main.go:141] libmachine: Parsing certificate...
	I0419 12:34:52.581878    8736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:34:52.581900    8736 main.go:141] libmachine: Decoding PEM data...
	I0419 12:34:52.581923    8736 main.go:141] libmachine: Parsing certificate...
	I0419 12:34:52.582194    8736 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:34:52.719635    8736 main.go:141] libmachine: Creating SSH key...
	I0419 12:34:52.825305    8736 main.go:141] libmachine: Creating Disk image...
	I0419 12:34:52.825325    8736 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:34:52.825495    8736 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/disk.qcow2
	I0419 12:34:52.838975    8736 main.go:141] libmachine: STDOUT: 
	I0419 12:34:52.838996    8736 main.go:141] libmachine: STDERR: 
	I0419 12:34:52.839047    8736 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/disk.qcow2 +20000M
	I0419 12:34:52.851606    8736 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:34:52.851635    8736 main.go:141] libmachine: STDERR: 
	I0419 12:34:52.851651    8736 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/disk.qcow2
	I0419 12:34:52.851656    8736 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:34:52.851690    8736 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:e9:5d:83:94:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/disk.qcow2
	I0419 12:34:52.853998    8736 main.go:141] libmachine: STDOUT: 
	I0419 12:34:52.854027    8736 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:34:52.854053    8736 client.go:171] duration metric: took 272.314708ms to LocalClient.Create
	I0419 12:34:52.994704    8736 cache.go:162] opening:  /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0419 12:34:53.000081    8736 cache.go:162] opening:  /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0419 12:34:53.024299    8736 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0419 12:34:53.024326    8736 cache.go:162] opening:  /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0419 12:34:53.047838    8736 cache.go:162] opening:  /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0419 12:34:53.060649    8736 cache.go:162] opening:  /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0419 12:34:53.109482    8736 cache.go:162] opening:  /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0419 12:34:53.115492    8736 cache.go:162] opening:  /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0419 12:34:53.122529    8736 cache.go:157] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0419 12:34:53.122554    8736 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 566.190708ms
	I0419 12:34:53.122569    8736 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0419 12:34:53.208311    8736 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0419 12:34:53.208394    8736 cache.go:162] opening:  /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0419 12:34:53.850728    8736 cache.go:157] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0419 12:34:53.850811    8736 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.294495916s
	I0419 12:34:53.850878    8736 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0419 12:34:54.854311    8736 start.go:128] duration metric: took 2.297278042s to createHost
	I0419 12:34:54.854360    8736 start.go:83] releasing machines lock for "test-preload-428000", held for 2.297421458s
	W0419 12:34:54.854414    8736 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:34:54.866524    8736 out.go:177] * Deleting "test-preload-428000" in qemu2 ...
	W0419 12:34:54.885882    8736 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:34:54.885917    8736 start.go:728] Will try again in 5 seconds ...
	I0419 12:34:55.158824    8736 cache.go:157] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0419 12:34:55.158869    8736 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.6024015s
	I0419 12:34:55.158935    8736 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0419 12:34:55.509443    8736 cache.go:157] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0419 12:34:55.509489    8736 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.95310175s
	I0419 12:34:55.509512    8736 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0419 12:34:56.325431    8736 cache.go:157] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0419 12:34:56.325486    8736 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 3.769204166s
	I0419 12:34:56.325509    8736 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0419 12:34:57.358170    8736 cache.go:157] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0419 12:34:57.358215    8736 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.8019595s
	I0419 12:34:57.358238    8736 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0419 12:34:58.992658    8736 cache.go:157] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0419 12:34:58.992702    8736 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.436123708s
	I0419 12:34:58.992729    8736 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0419 12:34:59.886301    8736 start.go:360] acquireMachinesLock for test-preload-428000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:34:59.886699    8736 start.go:364] duration metric: took 330.042µs to acquireMachinesLock for "test-preload-428000"
	I0419 12:34:59.886822    8736 start.go:93] Provisioning new machine with config: &{Name:test-preload-428000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-428000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:34:59.887030    8736 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:34:59.898641    8736 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:34:59.948689    8736 start.go:159] libmachine.API.Create for "test-preload-428000" (driver="qemu2")
	I0419 12:34:59.948729    8736 client.go:168] LocalClient.Create starting
	I0419 12:34:59.948842    8736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:34:59.948902    8736 main.go:141] libmachine: Decoding PEM data...
	I0419 12:34:59.948920    8736 main.go:141] libmachine: Parsing certificate...
	I0419 12:34:59.948974    8736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:34:59.949018    8736 main.go:141] libmachine: Decoding PEM data...
	I0419 12:34:59.949032    8736 main.go:141] libmachine: Parsing certificate...
	I0419 12:34:59.949523    8736 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:35:00.083716    8736 main.go:141] libmachine: Creating SSH key...
	I0419 12:35:00.291489    8736 main.go:141] libmachine: Creating Disk image...
	I0419 12:35:00.291498    8736 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:35:00.291705    8736 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/disk.qcow2
	I0419 12:35:00.304669    8736 main.go:141] libmachine: STDOUT: 
	I0419 12:35:00.304697    8736 main.go:141] libmachine: STDERR: 
	I0419 12:35:00.304761    8736 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/disk.qcow2 +20000M
	I0419 12:35:00.315975    8736 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:35:00.315994    8736 main.go:141] libmachine: STDERR: 
	I0419 12:35:00.316007    8736 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/disk.qcow2
	I0419 12:35:00.316011    8736 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:35:00.316055    8736 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:d0:b1:6b:7b:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/test-preload-428000/disk.qcow2
	I0419 12:35:00.317773    8736 main.go:141] libmachine: STDOUT: 
	I0419 12:35:00.317791    8736 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:35:00.317803    8736 client.go:171] duration metric: took 369.074958ms to LocalClient.Create
	I0419 12:35:02.317943    8736 start.go:128] duration metric: took 2.430911959s to createHost
	I0419 12:35:02.318032    8736 start.go:83] releasing machines lock for "test-preload-428000", held for 2.431359666s
	W0419 12:35:02.318265    8736 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-428000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:35:02.335568    8736 out.go:177] 
	W0419 12:35:02.340611    8736 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:35:02.340646    8736 out.go:239] * 
	W0419 12:35:02.343477    8736 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:35:02.351488    8736 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-428000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-04-19 12:35:02.371427 -0700 PDT m=+711.932488626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-428000 -n test-preload-428000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-428000 -n test-preload-428000: exit status 7 (70.796334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-428000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-428000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-428000
--- FAIL: TestPreload (10.10s)
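Note: the disk-creation step logged above is two plain qemu-img invocations: convert the raw boot image to a qcow2 disk, then grow it by the requested amount ("+20000M"). The following Go sketch mirrors those two logged commands via os/exec so the step can be reproduced outside the test harness; the file names and size here are illustrative placeholders, not paths from this run.

	// diskimage.go - a sketch of the qemu-img convert/resize sequence shown in
	// the TestPreload log above. Placeholder paths; error handling kept minimal.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func createDisk(rawPath, qcowPath string, extraMB int) error {
		// Step 1: convert the raw seed image into a qcow2 disk
		// ("qemu-img convert -f raw -O qcow2 <raw> <qcow2>" in the log).
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2",
			rawPath, qcowPath).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img convert: %v: %s", err, out)
		}
		// Step 2: grow the image by extraMB megabytes
		// ("qemu-img resize <qcow2> +20000M" in the log).
		if out, err := exec.Command("qemu-img", "resize", qcowPath,
			fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			fmt.Println(err)
		}
	}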

TestScheduledStopUnix (10.04s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-214000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-214000 --memory=2048 --driver=qemu2 : exit status 80 (9.8712705s)

-- stdout --
	* [scheduled-stop-214000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-214000" primary control-plane node in "scheduled-stop-214000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-214000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-214000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-214000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-214000" primary control-plane node in "scheduled-stop-214000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-214000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-214000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-04-19 12:35:12.41297 -0700 PDT m=+721.974256667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-214000 -n scheduled-stop-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-214000 -n scheduled-stop-214000: exit status 7 (70.0235ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-214000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-214000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-214000
--- FAIL: TestScheduledStopUnix (10.04s)
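Note: every failure in this run reduces to the same condition: nothing is listening on /var/run/socket_vmnet. A probe such as the one below, which simply dials the unix socket the qemu2 driver depends on, reproduces the "Connection refused" without involving minikube at all. This is a diagnostic sketch; the socket path is taken from the log output, and the timeout is arbitrary.

	// socketprobe.go - checks whether the socket_vmnet daemon is accepting
	// connections on the unix socket used by the failing tests.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path reported in every failure above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here means the daemon is not running (or not
			// listening on this path) - the same condition that makes every
			// "Creating qemu2 VM" step in this report fail.
			fmt.Fprintf(os.Stderr, "socket_vmnet probe failed: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}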

TestSkaffold (11.98s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3747160192 version
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-210000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-210000 --memory=2600 --driver=qemu2 : exit status 80 (9.702126125s)

-- stdout --
	* [skaffold-210000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-210000" primary control-plane node in "skaffold-210000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-210000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-210000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-210000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-210000" primary control-plane node in "skaffold-210000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-210000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-210000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
panic.go:626: *** TestSkaffold FAILED at 2024-04-19 12:35:24.395359 -0700 PDT m=+733.956913876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-210000 -n skaffold-210000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-210000 -n skaffold-210000: exit status 7 (65.350958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-210000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-210000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-210000
--- FAIL: TestSkaffold (11.98s)
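Note: the logged QEMU command line shows why the socket matters: the VM is launched through socket_vmnet_client, and the "-netdev socket,id=net0,fd=3" argument indicates the client dials /var/run/socket_vmnet and hands the connected descriptor to QEMU as file descriptor 3. Below is a minimal Go sketch of that fd-passing pattern - an illustration of the mechanism only, not the actual socket_vmnet_client source, and with the QEMU arguments truncated.

	// fdpass.go - dial the unix socket, then start QEMU with the connected
	// descriptor mapped to fd 3 in the child (ExtraFiles[0] becomes fd 3).
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("dial: %v", err) // the failure mode seen throughout this report
		}
		f, err := conn.(*net.UnixConn).File() // duplicate the fd for the child process
		if err != nil {
			log.Fatalf("file: %v", err)
		}
		// Only the netdev argument is shown; the real command line is far longer.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] is fd 3 in the child
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("qemu: %v", err)
		}
	}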

TestRunningBinaryUpgrade (583.76s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2429484563 start -p running-upgrade-311000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2429484563 start -p running-upgrade-311000 --memory=2200 --vm-driver=qemu2 : (47.960303125s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-311000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-311000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.253094667s)

-- stdout --
	* [running-upgrade-311000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-311000" primary control-plane node in "running-upgrade-311000" cluster
	* Updating the running qemu2 "running-upgrade-311000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0419 12:36:54.118922    9133 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:36:54.119062    9133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:36:54.119065    9133 out.go:304] Setting ErrFile to fd 2...
	I0419 12:36:54.119067    9133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:36:54.119182    9133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:36:54.120116    9133 out.go:298] Setting JSON to false
	I0419 12:36:54.137676    9133 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5785,"bootTime":1713549629,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:36:54.137761    9133 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:36:54.142285    9133 out.go:177] * [running-upgrade-311000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:36:54.150238    9133 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:36:54.150279    9133 notify.go:220] Checking for updates...
	I0419 12:36:54.157152    9133 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:36:54.161263    9133 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:36:54.164229    9133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:36:54.167127    9133 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:36:54.170208    9133 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:36:54.173511    9133 config.go:182] Loaded profile config "running-upgrade-311000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:36:54.175044    9133 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0419 12:36:54.178206    9133 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:36:54.182185    9133 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:36:54.187194    9133 start.go:297] selected driver: qemu2
	I0419 12:36:54.187206    9133 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51218 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0419 12:36:54.187258    9133 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:36:54.189856    9133 cni.go:84] Creating CNI manager for ""
	I0419 12:36:54.189874    9133 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:36:54.189895    9133 start.go:340] cluster config:
	{Name:running-upgrade-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51218 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0419 12:36:54.189963    9133 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:36:54.204724    9133 out.go:177] * Starting "running-upgrade-311000" primary control-plane node in "running-upgrade-311000" cluster
	I0419 12:36:54.208201    9133 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0419 12:36:54.208215    9133 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0419 12:36:54.208219    9133 cache.go:56] Caching tarball of preloaded images
	I0419 12:36:54.208272    9133 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:36:54.208278    9133 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0419 12:36:54.208328    9133 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/config.json ...
	I0419 12:36:54.208803    9133 start.go:360] acquireMachinesLock for running-upgrade-311000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:36:54.208836    9133 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "running-upgrade-311000"
	I0419 12:36:54.208846    9133 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:36:54.208853    9133 fix.go:54] fixHost starting: 
	I0419 12:36:54.209571    9133 fix.go:112] recreateIfNeeded on running-upgrade-311000: state=Running err=<nil>
	W0419 12:36:54.209581    9133 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:36:54.218179    9133 out.go:177] * Updating the running qemu2 "running-upgrade-311000" VM ...
	I0419 12:36:54.222209    9133 machine.go:94] provisionDockerMachine start ...
	I0419 12:36:54.222244    9133 main.go:141] libmachine: Using SSH client type: native
	I0419 12:36:54.222352    9133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10502dc80] 0x1050304e0 <nil>  [] 0s} localhost 51186 <nil> <nil>}
	I0419 12:36:54.222357    9133 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 12:36:54.283905    9133 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-311000
	
	I0419 12:36:54.283922    9133 buildroot.go:166] provisioning hostname "running-upgrade-311000"
	I0419 12:36:54.283967    9133 main.go:141] libmachine: Using SSH client type: native
	I0419 12:36:54.284097    9133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10502dc80] 0x1050304e0 <nil>  [] 0s} localhost 51186 <nil> <nil>}
	I0419 12:36:54.284104    9133 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-311000 && echo "running-upgrade-311000" | sudo tee /etc/hostname
	I0419 12:36:54.346496    9133 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-311000
	
	I0419 12:36:54.346538    9133 main.go:141] libmachine: Using SSH client type: native
	I0419 12:36:54.346633    9133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10502dc80] 0x1050304e0 <nil>  [] 0s} localhost 51186 <nil> <nil>}
	I0419 12:36:54.346641    9133 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-311000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-311000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-311000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 12:36:54.402459    9133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 12:36:54.402468    9133 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18669-6895/.minikube CaCertPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18669-6895/.minikube}
	I0419 12:36:54.402475    9133 buildroot.go:174] setting up certificates
	I0419 12:36:54.402485    9133 provision.go:84] configureAuth start
	I0419 12:36:54.402488    9133 provision.go:143] copyHostCerts
	I0419 12:36:54.402563    9133 exec_runner.go:144] found /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.pem, removing ...
	I0419 12:36:54.402568    9133 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.pem
	I0419 12:36:54.402704    9133 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.pem (1078 bytes)
	I0419 12:36:54.402895    9133 exec_runner.go:144] found /Users/jenkins/minikube-integration/18669-6895/.minikube/cert.pem, removing ...
	I0419 12:36:54.402899    9133 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18669-6895/.minikube/cert.pem
	I0419 12:36:54.402977    9133 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18669-6895/.minikube/cert.pem (1123 bytes)
	I0419 12:36:54.403097    9133 exec_runner.go:144] found /Users/jenkins/minikube-integration/18669-6895/.minikube/key.pem, removing ...
	I0419 12:36:54.403100    9133 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18669-6895/.minikube/key.pem
	I0419 12:36:54.403144    9133 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18669-6895/.minikube/key.pem (1679 bytes)
	I0419 12:36:54.403223    9133 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-311000 san=[127.0.0.1 localhost minikube running-upgrade-311000]
	I0419 12:36:54.643751    9133 provision.go:177] copyRemoteCerts
	I0419 12:36:54.643799    9133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 12:36:54.643815    9133 sshutil.go:53] new ssh client: &{IP:localhost Port:51186 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/running-upgrade-311000/id_rsa Username:docker}
	I0419 12:36:54.673985    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 12:36:54.682788    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 12:36:54.690007    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0419 12:36:54.696726    9133 provision.go:87] duration metric: took 294.241458ms to configureAuth
	I0419 12:36:54.696735    9133 buildroot.go:189] setting minikube options for container-runtime
	I0419 12:36:54.696834    9133 config.go:182] Loaded profile config "running-upgrade-311000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:36:54.696865    9133 main.go:141] libmachine: Using SSH client type: native
	I0419 12:36:54.696982    9133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10502dc80] 0x1050304e0 <nil>  [] 0s} localhost 51186 <nil> <nil>}
	I0419 12:36:54.696986    9133 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 12:36:54.753196    9133 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 12:36:54.753208    9133 buildroot.go:70] root file system type: tmpfs
	I0419 12:36:54.753261    9133 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 12:36:54.753315    9133 main.go:141] libmachine: Using SSH client type: native
	I0419 12:36:54.753422    9133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10502dc80] 0x1050304e0 <nil>  [] 0s} localhost 51186 <nil> <nil>}
	I0419 12:36:54.753456    9133 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 12:36:54.810236    9133 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 12:36:54.810282    9133 main.go:141] libmachine: Using SSH client type: native
	I0419 12:36:54.810390    9133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10502dc80] 0x1050304e0 <nil>  [] 0s} localhost 51186 <nil> <nil>}
	I0419 12:36:54.810398    9133 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 12:36:54.867277    9133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
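
The SSH command above is a change-detection idiom: `diff -u` exits non-zero only when the rendered unit differs from the live one, so the move/daemon-reload/enable/restart branch runs only on actual drift. A minimal standalone Go sketch of the same pattern (illustrative only, not minikube's helper; it runs locally instead of over SSH, and the unit content is a stub):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit writes the unit only when its content changed, then
// reloads systemd and restarts docker -- mirroring the shell idiom
// `diff -u old new || { mv; daemon-reload; enable; restart; }`.
func updateUnit(path string, want []byte) error {
	have, err := os.ReadFile(path)
	if err == nil && bytes.Equal(have, want) {
		return nil // unchanged: skip the disruptive restart
	}
	if err := os.WriteFile(path+".new", want, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // stub content
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

In the log above the diff succeeded silently (empty output), so docker was restarted with the freshly rendered unit.
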
	I0419 12:36:54.867290    9133 machine.go:97] duration metric: took 645.089584ms to provisionDockerMachine
	I0419 12:36:54.867295    9133 start.go:293] postStartSetup for "running-upgrade-311000" (driver="qemu2")
	I0419 12:36:54.867301    9133 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 12:36:54.867350    9133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 12:36:54.867364    9133 sshutil.go:53] new ssh client: &{IP:localhost Port:51186 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/running-upgrade-311000/id_rsa Username:docker}
	I0419 12:36:54.899084    9133 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 12:36:54.900466    9133 info.go:137] Remote host: Buildroot 2021.02.12
	I0419 12:36:54.900472    9133 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18669-6895/.minikube/addons for local assets ...
	I0419 12:36:54.900546    9133 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18669-6895/.minikube/files for local assets ...
	I0419 12:36:54.900674    9133 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18669-6895/.minikube/files/etc/ssl/certs/73042.pem -> 73042.pem in /etc/ssl/certs
	I0419 12:36:54.900815    9133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 12:36:54.903712    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/files/etc/ssl/certs/73042.pem --> /etc/ssl/certs/73042.pem (1708 bytes)
	I0419 12:36:54.910639    9133 start.go:296] duration metric: took 43.339375ms for postStartSetup
	I0419 12:36:54.910652    9133 fix.go:56] duration metric: took 701.817958ms for fixHost
	I0419 12:36:54.910684    9133 main.go:141] libmachine: Using SSH client type: native
	I0419 12:36:54.910780    9133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10502dc80] 0x1050304e0 <nil>  [] 0s} localhost 51186 <nil> <nil>}
	I0419 12:36:54.910784    9133 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0419 12:36:54.965936    9133 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713555415.417077304
	
	I0419 12:36:54.965943    9133 fix.go:216] guest clock: 1713555415.417077304
	I0419 12:36:54.965947    9133 fix.go:229] Guest: 2024-04-19 12:36:55.417077304 -0700 PDT Remote: 2024-04-19 12:36:54.910653 -0700 PDT m=+0.814744334 (delta=506.424304ms)
	I0419 12:36:54.965965    9133 fix.go:200] guest clock delta is within tolerance: 506.424304ms
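
The `date +%s.%N` round-trip above measures guest/host clock skew; the ~506ms delta is accepted because it falls under the tolerance. A small Go sketch of the comparison (the one-second tolerance is a hypothetical value for illustration, and float parsing loses a few nanoseconds, which is immaterial at this scale):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

const tolerance = time.Second // hypothetical threshold for illustration

// guestDelta parses the guest's `date +%s.%N` output and returns
// guest-minus-host as a duration.
func guestDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Sample values taken from the log lines above.
	delta, err := guestDelta("1713555415.417077304", time.Unix(1713555414, 910653000))
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, math.Abs(float64(delta)) <= float64(tolerance))
}
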
	I0419 12:36:54.965969    9133 start.go:83] releasing machines lock for "running-upgrade-311000", held for 757.145333ms
	I0419 12:36:54.966041    9133 ssh_runner.go:195] Run: cat /version.json
	I0419 12:36:54.966050    9133 sshutil.go:53] new ssh client: &{IP:localhost Port:51186 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/running-upgrade-311000/id_rsa Username:docker}
	I0419 12:36:54.966041    9133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 12:36:54.966078    9133 sshutil.go:53] new ssh client: &{IP:localhost Port:51186 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/running-upgrade-311000/id_rsa Username:docker}
	W0419 12:36:54.966710    9133 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51186: connect: connection refused
	I0419 12:36:54.966734    9133 retry.go:31] will retry after 135.804896ms: dial tcp [::1]:51186: connect: connection refused
	W0419 12:36:54.996429    9133 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0419 12:36:54.996491    9133 ssh_runner.go:195] Run: systemctl --version
	I0419 12:36:54.998383    9133 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 12:36:54.999994    9133 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 12:36:55.000020    9133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0419 12:36:55.003056    9133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0419 12:36:55.007512    9133 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
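
The two `find ... -exec sed` runs above rewrite any bridge/podman CNI configs so their "subnet" matches the pod CIDR 10.244.0.0/16. A pure-Go stand-in for the core rewrite (regexp and file handling are illustrative; the real flow shells these edits out to sed on the guest):

package main

import (
	"fmt"
	"regexp"
)

// subnetRe matches a JSON "subnet" key and captures everything up to its value.
var subnetRe = regexp.MustCompile(`("subnet":\s*)"[^"]*"`)

// rewriteSubnet points every "subnet" in a CNI config at the cluster pod CIDR.
func rewriteSubnet(conf []byte, cidr string) []byte {
	return subnetRe.ReplaceAll(conf, []byte(`${1}"`+cidr+`"`))
}

func main() {
	in := []byte(`{"type":"bridge","ipam":{"subnet":"10.88.0.0/16"}}`)
	fmt.Printf("%s\n", rewriteSubnet(in, "10.244.0.0/16"))
}
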
	I0419 12:36:55.007519    9133 start.go:494] detecting cgroup driver to use...
	I0419 12:36:55.007624    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 12:36:55.013050    9133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0419 12:36:55.015922    9133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 12:36:55.019332    9133 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 12:36:55.019354    9133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 12:36:55.022631    9133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 12:36:55.025648    9133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 12:36:55.028619    9133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 12:36:55.032047    9133 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 12:36:55.036341    9133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 12:36:55.039410    9133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 12:36:55.042835    9133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0419 12:36:55.046055    9133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 12:36:55.048505    9133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 12:36:55.051166    9133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:36:55.131231    9133 ssh_runner.go:195] Run: sudo systemctl restart containerd
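
The run of sed edits above rewrites /etc/containerd/config.toml so containerd uses the "cgroupfs" driver (SystemdCgroup = false), matching what kubelet is configured with later in this run. A Go sketch of the key edit, standing in for the sed one-liner and assuming the simple `SystemdCgroup = ...` key/value layout containerd's TOML uses:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// setCgroupfs flips any `SystemdCgroup = ...` line to false while
// preserving the line's original indentation.
func setCgroupfs(config string) string {
	var out strings.Builder
	sc := bufio.NewScanner(strings.NewReader(config))
	for sc.Scan() {
		line := sc.Text()
		trimmed := strings.TrimLeft(line, " \t")
		if strings.HasPrefix(trimmed, "SystemdCgroup") {
			indent := line[:len(line)-len(trimmed)]
			line = indent + "SystemdCgroup = false"
		}
		out.WriteString(line + "\n")
	}
	return out.String()
}

func main() {
	fmt.Print(setCgroupfs("[plugins.cri]\n  SystemdCgroup = true\n"))
}
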
	I0419 12:36:55.138187    9133 start.go:494] detecting cgroup driver to use...
	I0419 12:36:55.138253    9133 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 12:36:55.146880    9133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 12:36:55.152103    9133 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 12:36:55.196598    9133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 12:36:55.201698    9133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 12:36:55.206503    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 12:36:55.211890    9133 ssh_runner.go:195] Run: which cri-dockerd
	I0419 12:36:55.213169    9133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 12:36:55.215662    9133 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 12:36:55.220603    9133 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 12:36:55.293526    9133 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 12:36:55.366616    9133 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 12:36:55.366669    9133 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 12:36:55.372585    9133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:36:55.446240    9133 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 12:36:56.909959    9133 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.463735583s)
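
The 130-byte /etc/docker/daemon.json pushed just before this restart configures Docker for the same "cgroupfs" driver. Its exact contents are not echoed in the log; the Go sketch below produces a plausible reconstruction (every key besides exec-opts is an assumption):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed daemon.json shape; only the cgroupdriver setting is
	// implied by the log line above.
	cfg := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Printf("%s\n", b) // would be copied to /etc/docker/daemon.json
}
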
	I0419 12:36:56.910022    9133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 12:36:56.914916    9133 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0419 12:36:56.921234    9133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 12:36:56.925708    9133 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 12:36:57.021127    9133 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 12:36:57.090780    9133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:36:57.152430    9133 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 12:36:57.158316    9133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 12:36:57.162808    9133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:36:57.246869    9133 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 12:36:57.287059    9133 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 12:36:57.287144    9133 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
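
start.go waits up to 60s for /var/run/cri-dockerd.sock to appear before probing crictl. A minimal Go sketch of such a socket wait (the poll interval is an assumed value):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the
// deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket ready")
}
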
	I0419 12:36:57.289252    9133 start.go:562] Will wait 60s for crictl version
	I0419 12:36:57.289297    9133 ssh_runner.go:195] Run: which crictl
	I0419 12:36:57.290904    9133 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 12:36:57.303136    9133 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0419 12:36:57.303203    9133 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 12:36:57.316582    9133 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 12:36:57.341596    9133 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0419 12:36:57.341666    9133 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0419 12:36:57.343022    9133 kubeadm.go:877] updating cluster {Name:running-upgrade-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51218 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0419 12:36:57.343068    9133 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0419 12:36:57.343103    9133 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 12:36:57.354053    9133 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0419 12:36:57.354066    9133 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0419 12:36:57.354119    9133 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0419 12:36:57.357064    9133 ssh_runner.go:195] Run: which lz4
	I0419 12:36:57.358255    9133 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0419 12:36:57.359441    9133 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0419 12:36:57.359450    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0419 12:36:58.109436    9133 docker.go:649] duration metric: took 751.225875ms to copy over tarball
	I0419 12:36:58.109491    9133 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0419 12:36:59.348020    9133 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.238543083s)
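
The preload path above avoids pulling images one by one: the ~360MB preloaded-images tarball is copied to /preloaded.tar.lz4, then unpacked into /var with lz4 while preserving security xattrs. A Go sketch wrapping the same tar invocation (requires tar and lz4 on the target; sudo prompt handling is ignored for brevity):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the cached-images tarball into /var, keeping
// the security.capability xattrs, exactly as the logged tar call does.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
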
	I0419 12:36:59.348036    9133 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0419 12:36:59.363620    9133 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0419 12:36:59.366893    9133 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0419 12:36:59.372046    9133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:36:59.433749    9133 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 12:37:00.629144    9133 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.195404625s)
	I0419 12:37:00.629241    9133 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 12:37:00.642904    9133 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0419 12:37:00.642912    9133 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0419 12:37:00.642918    9133 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0419 12:37:00.649092    9133 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0419 12:37:00.649092    9133 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0419 12:37:00.649131    9133 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0419 12:37:00.649171    9133 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0419 12:37:00.649200    9133 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0419 12:37:00.649252    9133 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0419 12:37:00.649313    9133 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:37:00.649353    9133 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:37:00.658376    9133 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0419 12:37:00.658440    9133 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:37:00.658517    9133 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0419 12:37:00.658653    9133 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0419 12:37:00.659183    9133 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0419 12:37:00.659289    9133 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0419 12:37:00.659321    9133 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0419 12:37:00.659345    9133 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:37:01.058914    9133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0419 12:37:01.072794    9133 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0419 12:37:01.072832    9133 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0419 12:37:01.072879    9133 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0419 12:37:01.083598    9133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0419 12:37:01.087005    9133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0419 12:37:01.087483    9133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	W0419 12:37:01.100120    9133 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0419 12:37:01.100243    9133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:37:01.100374    9133 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0419 12:37:01.100390    9133 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0419 12:37:01.100404    9133 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0419 12:37:01.100411    9133 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0419 12:37:01.100415    9133 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0419 12:37:01.100436    9133 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0419 12:37:01.110770    9133 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0419 12:37:01.110793    9133 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:37:01.110843    9133 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:37:01.118045    9133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0419 12:37:01.118464    9133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0419 12:37:01.120309    9133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0419 12:37:01.123972    9133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0419 12:37:01.129604    9133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0419 12:37:01.129723    9133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0419 12:37:01.137861    9133 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0419 12:37:01.137883    9133 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0419 12:37:01.137935    9133 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0419 12:37:01.138311    9133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0419 12:37:01.143342    9133 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0419 12:37:01.143354    9133 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0419 12:37:01.143361    9133 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0419 12:37:01.143373    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0419 12:37:01.143395    9133 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0419 12:37:01.165917    9133 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0419 12:37:01.165939    9133 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0419 12:37:01.165996    9133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0419 12:37:01.166015    9133 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0419 12:37:01.166100    9133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0419 12:37:01.198605    9133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0419 12:37:01.198660    9133 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0419 12:37:01.198674    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0419 12:37:01.198682    9133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0419 12:37:01.200524    9133 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0419 12:37:01.200532    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0419 12:37:01.242732    9133 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0419 12:37:01.242753    9133 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0419 12:37:01.242767    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0419 12:37:01.270662    9133 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
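
Each cached image is transferred to /var/lib/minikube/images and then loaded with `sudo cat <file> | docker load`, as above. The same load can be done without the shell pipeline by handing the archive to docker load's stdin, as in this Go sketch (the path is one of the files from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage streams a saved image archive into `docker load` over stdin,
// replacing the `cat file | docker load` pipeline.
func loadImage(archive string) error {
	f, err := os.Open(archive)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
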
	W0419 12:37:01.544127    9133 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0419 12:37:01.544709    9133 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:37:01.584629    9133 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0419 12:37:01.584673    9133 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:37:01.584781    9133 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:37:03.087948    9133 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.503161s)
	I0419 12:37:03.087984    9133 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0419 12:37:03.088400    9133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0419 12:37:03.094277    9133 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0419 12:37:03.094307    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0419 12:37:03.142020    9133 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0419 12:37:03.142039    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0419 12:37:03.378658    9133 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0419 12:37:03.378695    9133 cache_images.go:92] duration metric: took 2.735832916s to LoadCachedImages
	W0419 12:37:03.378726    9133 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0419 12:37:03.378731    9133 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0419 12:37:03.378793    9133 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-311000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 12:37:03.378844    9133 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0419 12:37:03.393232    9133 cni.go:84] Creating CNI manager for ""
	I0419 12:37:03.393245    9133 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:37:03.393250    9133 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 12:37:03.393258    9133 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-311000 NodeName:running-upgrade-311000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 12:37:03.393333    9133 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-311000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 12:37:03.393398    9133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0419 12:37:03.396412    9133 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 12:37:03.396436    9133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 12:37:03.399442    9133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0419 12:37:03.404606    9133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 12:37:03.409433    9133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0419 12:37:03.415370    9133 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0419 12:37:03.416778    9133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:37:03.482532    9133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 12:37:03.488129    9133 certs.go:68] Setting up /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000 for IP: 10.0.2.15
	I0419 12:37:03.488136    9133 certs.go:194] generating shared ca certs ...
	I0419 12:37:03.488144    9133 certs.go:226] acquiring lock for ca certs: {Name:mke38b98dd5558382d381a0a6e0e324ad9664707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:37:03.488376    9133 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.key
	I0419 12:37:03.488429    9133 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/proxy-client-ca.key
	I0419 12:37:03.488434    9133 certs.go:256] generating profile certs ...
	I0419 12:37:03.488510    9133 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/client.key
	I0419 12:37:03.488522    9133 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/apiserver.key.b36559a7
	I0419 12:37:03.488531    9133 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/apiserver.crt.b36559a7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0419 12:37:03.545527    9133 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/apiserver.crt.b36559a7 ...
	I0419 12:37:03.545539    9133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/apiserver.crt.b36559a7: {Name:mk5399c48305141a260f12c4262a42c080783439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:37:03.545820    9133 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/apiserver.key.b36559a7 ...
	I0419 12:37:03.545826    9133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/apiserver.key.b36559a7: {Name:mkecfe49265ce42ad37ad1a631e510a21a4d3ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:37:03.545947    9133 certs.go:381] copying /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/apiserver.crt.b36559a7 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/apiserver.crt
	I0419 12:37:03.546127    9133 certs.go:385] copying /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/apiserver.key.b36559a7 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/apiserver.key
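
The apiserver serving cert generated above carries IP SANs for the service VIP, loopback, a secondary service IP, and the node IP (10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15). A compact Go sketch of issuing such a cert (self-signed here for brevity; the real one is signed by minikubeCA, and the 26280h lifetime mirrors CertExpiration from the profile config above):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the SANs logged above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
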
	I0419 12:37:03.546320    9133 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/proxy-client.key
	I0419 12:37:03.546449    9133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/7304.pem (1338 bytes)
	W0419 12:37:03.546481    9133 certs.go:480] ignoring /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/7304_empty.pem, impossibly tiny 0 bytes
	I0419 12:37:03.546487    9133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 12:37:03.546516    9133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem (1078 bytes)
	I0419 12:37:03.546541    9133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem (1123 bytes)
	I0419 12:37:03.546564    9133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/key.pem (1679 bytes)
	I0419 12:37:03.546613    9133 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/files/etc/ssl/certs/73042.pem (1708 bytes)
	I0419 12:37:03.546934    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 12:37:03.554290    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0419 12:37:03.561066    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 12:37:03.568192    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0419 12:37:03.575305    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0419 12:37:03.582647    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0419 12:37:03.589339    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 12:37:03.595947    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0419 12:37:03.603229    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/7304.pem --> /usr/share/ca-certificates/7304.pem (1338 bytes)
	I0419 12:37:03.610634    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/files/etc/ssl/certs/73042.pem --> /usr/share/ca-certificates/73042.pem (1708 bytes)
	I0419 12:37:03.617707    9133 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 12:37:03.624435    9133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0419 12:37:03.629283    9133 ssh_runner.go:195] Run: openssl version
	I0419 12:37:03.630899    9133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 12:37:03.633993    9133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 12:37:03.635316    9133 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0419 12:37:03.635341    9133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 12:37:03.637070    9133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 12:37:03.639737    9133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7304.pem && ln -fs /usr/share/ca-certificates/7304.pem /etc/ssl/certs/7304.pem"
	I0419 12:37:03.643273    9133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7304.pem
	I0419 12:37:03.644694    9133 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:24 /usr/share/ca-certificates/7304.pem
	I0419 12:37:03.644713    9133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7304.pem
	I0419 12:37:03.646521    9133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7304.pem /etc/ssl/certs/51391683.0"
	I0419 12:37:03.649065    9133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73042.pem && ln -fs /usr/share/ca-certificates/73042.pem /etc/ssl/certs/73042.pem"
	I0419 12:37:03.651916    9133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73042.pem
	I0419 12:37:03.653535    9133 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:24 /usr/share/ca-certificates/73042.pem
	I0419 12:37:03.653556    9133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73042.pem
	I0419 12:37:03.655274    9133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73042.pem /etc/ssl/certs/3ec20f2e.0"
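
The test-and-symlink commands above implement the classic c_rehash layout: each CA PEM in /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 link (b5213941.0, 51391683.0, 3ec20f2e.0 in this run) so OpenSSL can find it by hash. A Go sketch of one install step, using openssl itself to compute the hash (paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA links /etc/ssl/certs/<subject-hash>.0 at the given PEM,
// matching the `openssl x509 -hash` + `ln -fs` idiom in the log.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // replace a stale link if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
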
	I0419 12:37:03.658438    9133 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 12:37:03.659924    9133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0419 12:37:03.661594    9133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0419 12:37:03.663328    9133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0419 12:37:03.665266    9133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0419 12:37:03.667183    9133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0419 12:37:03.669065    9133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
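
The `-checkend 86400` probes above ask whether each control-plane cert expires within the next 24 hours; exit status 0 means still valid. The same check in Go (the file path is one of the certs probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether a PEM cert's NotAfter falls inside the
// next d, i.e. the openssl `-checkend` semantics.
func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	b, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	exp, err := expiresWithin(b, 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", exp)
}
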
	I0419 12:37:03.670782    9133 kubeadm.go:391] StartCluster: {Name:running-upgrade-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51218 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0419 12:37:03.670847    9133 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0419 12:37:03.680971    9133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0419 12:37:03.684052    9133 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0419 12:37:03.684058    9133 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0419 12:37:03.684061    9133 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0419 12:37:03.684084    9133 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0419 12:37:03.687096    9133 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0419 12:37:03.687128    9133 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-311000" does not appear in /Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:37:03.687151    9133 kubeconfig.go:62] /Users/jenkins/minikube-integration/18669-6895/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-311000" cluster setting kubeconfig missing "running-upgrade-311000" context setting]
	I0419 12:37:03.687334    9133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/kubeconfig: {Name:mkd215d166854846254d417d030271f915e1c7df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:37:03.688228    9133 kapi.go:59] client config for running-upgrade-311000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/client.key", CAFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063bf980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 12:37:03.689080    9133 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0419 12:37:03.691825    9133 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-311000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
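
Drift detection above reuses the diff exit-code convention: 0 means the stored kubeadm.yaml matches the new render, 1 means it drifted (here the criSocket scheme and cgroup driver changed), anything else is an error. A Go sketch of that three-way decision:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u` and maps its exit status: 0 = identical,
// 1 = files differ (out holds the unified diff), 2 = real error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err // missing file or other failure
}

func main() {
	drifted, diff, err := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("kubeadm config drift detected:\n" + diff)
	}
}
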
	I0419 12:37:03.691830    9133 kubeadm.go:1154] stopping kube-system containers ...
	I0419 12:37:03.691865    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0419 12:37:03.702574    9133 docker.go:483] Stopping containers: [bbaf072b49f4 a61b8c85e5f1 1d424cfff08b 543f6d6ab63d ffe6fa954ae5 e6f848e18f0b 1473e4b7da49 68d62fa6bb16 f4525153957f f5d37cbbada1]
	I0419 12:37:03.702632    9133 ssh_runner.go:195] Run: docker stop bbaf072b49f4 a61b8c85e5f1 1d424cfff08b 543f6d6ab63d ffe6fa954ae5 e6f848e18f0b 1473e4b7da49 68d62fa6bb16 f4525153957f f5d37cbbada1
	I0419 12:37:03.714111    9133 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0419 12:37:03.807548    9133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 12:37:03.811862    9133 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Apr 19 19:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Apr 19 19:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Apr 19 19:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Apr 19 19:36 /etc/kubernetes/scheduler.conf
	
	I0419 12:37:03.811896    9133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/admin.conf
	I0419 12:37:03.815689    9133 kubeadm.go:162] "https://control-plane.minikube.internal:51218" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0419 12:37:03.815720    9133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 12:37:03.819293    9133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/kubelet.conf
	I0419 12:37:03.822607    9133 kubeadm.go:162] "https://control-plane.minikube.internal:51218" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0419 12:37:03.822632    9133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 12:37:03.825838    9133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/controller-manager.conf
	I0419 12:37:03.828672    9133 kubeadm.go:162] "https://control-plane.minikube.internal:51218" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0419 12:37:03.828694    9133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 12:37:03.831397    9133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/scheduler.conf
	I0419 12:37:03.833898    9133 kubeadm.go:162] "https://control-plane.minikube.internal:51218" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0419 12:37:03.833919    9133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
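
Each of the four kubeconfigs is then probed with grep for the expected control-plane endpoint; grep's exit status 1 (pattern absent, the "Process exited with status 1" records above) is read as "stale or unknown", and the file is deleted so the kubeadm phase below regenerates it. A compact sketch, assuming root on the node; the sketch treats any grep failure as a miss (function name hypothetical):

    package sketch

    import (
        "os"
        "os/exec"
    )

    // pruneStaleKubeconfigs deletes any kubeconfig that does not mention
    // the expected control-plane endpoint. grep -q exits 1 when the
    // pattern is absent, which is exactly the "may not be in ... -
    // will remove" case logged above.
    func pruneStaleKubeconfigs(endpoint string) error {
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if exec.Command("grep", "-q", endpoint, f).Run() != nil {
                if err := os.Remove(f); err != nil {
                    return err
                }
            }
        }
        return nil
    }

With https://control-plane.minikube.internal:51218 absent from all four files, all of them are removed and rebuilt from the new config.
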
	I0419 12:37:03.836759    9133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 12:37:03.839589    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 12:37:03.861491    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 12:37:04.297720    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0419 12:37:04.566228    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 12:37:04.687731    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
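
Rather than a monolithic kubeadm init, the cluster is rebuilt phase by phase from the new config: certificates, kubeconfigs, kubelet bootstrap, control-plane static-pod manifests, and local etcd. A sketch of that sequence, shelling out through bash exactly as the Run lines above do so the versioned-binaries PATH prefix takes effect (helper name hypothetical):

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // runInitPhases replays the phased kubeadm init sequence from the log,
    // every phase driven by the same rendered kubeadm.yaml. The env PATH
    // prefix makes the kubeadm staged for the target version win over any
    // system binary.
    func runInitPhases(version, cfg string) error {
        phases := []string{
            "certs all",
            "kubeconfig all",
            "kubelet-start",
            "control-plane all",
            "etcd local",
        }
        for _, phase := range phases {
            script := fmt.Sprintf(
                `sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
                version, phase, cfg)
            if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
                return fmt.Errorf("%s: %w\n%s", script, err, out)
            }
        }
        return nil
    }
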
	I0419 12:37:04.736356    9133 api_server.go:52] waiting for apiserver process to appear ...
	I0419 12:37:04.736443    9133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 12:37:05.238812    9133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 12:37:05.738526    9133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 12:37:05.743287    9133 api_server.go:72] duration metric: took 1.006955834s to wait for apiserver process to appear ...
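
With the control-plane manifests in place, readiness is checked in two stages. The first stage only waits for a kube-apiserver process to exist, polling pgrep roughly every half second and recording the elapsed time as the duration metric above. A sketch (hypothetical function; pgrep exits 0 only when a match exists):

    package sketch

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process
    // appears or the deadline passes, sleeping ~500ms between attempts
    // like the half-second cadence of the Run lines above.
    func waitForAPIServerProcess(timeout time.Duration) (time.Duration, error) {
        start := time.Now()
        for time.Since(start) < timeout {
            if exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run() == nil {
                return time.Since(start), nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return time.Since(start), fmt.Errorf("kube-apiserver process never appeared")
    }
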
	I0419 12:37:05.743295    9133 api_server.go:88] waiting for apiserver healthz status ...
	I0419 12:37:05.743304    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:37:10.745321    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:37:10.745366    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:37:15.745585    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:37:15.745630    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:37:20.746551    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:37:20.746633    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:37:25.747597    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:37:25.747644    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:37:30.748587    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:37:30.748664    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:37:35.750229    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:37:35.750317    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:37:40.752372    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:37:40.752454    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:37:45.754969    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:37:45.755068    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:37:50.757667    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:37:50.757742    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:37:55.760248    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:37:55.760303    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:38:00.762640    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:38:00.762722    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:38:05.763273    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
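
The second stage is the one that fails: an HTTPS GET against /healthz with a 5-second client timeout, retried until an overall deadline. Every attempt above dies with "Client.Timeout exceeded while awaiting headers", i.e. the apiserver process exists but never answers. A sketch of a single probe, assuming a self-signed serving cert is acceptable for a health check (production code would pin the cluster CA instead):

    package sketch

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz issues one GET against the apiserver's /healthz with a
    // 5s client timeout, matching the ~5s gap between each "Checking" and
    // "stopped" pair above. A hung apiserver surfaces as a timeout error
    // rather than a non-200 status.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. Client.Timeout exceeded while awaiting headers
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
        }
        return nil // apiserver answered
    }
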
	I0419 12:38:05.763730    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:38:05.803189    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:38:05.803324    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:38:05.825754    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:38:05.825865    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:38:05.859524    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:38:05.859594    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:38:05.874039    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:38:05.874108    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:38:05.886138    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:38:05.886204    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:38:05.896568    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:38:05.896627    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:38:05.906885    9133 logs.go:276] 0 containers: []
	W0419 12:38:05.906897    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:38:05.906957    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:38:05.919304    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
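
On every failed probe, the diagnostics loop first maps each control-plane component to its containers using the k8s_<component> name prefix that cri-dockerd gives pod containers. Two IDs per component, as here, typically means an exited pre-reconfigure container plus its restarted replacement; kindnet legitimately matches nothing because this cluster does not use it. A sketch of the enumeration (hypothetical helper):

    package sketch

    import (
        "os/exec"
        "strings"
    )

    // listComponentContainers returns, for each component, the IDs of all
    // containers (running or exited) whose names carry the k8s_<component>
    // prefix, exactly like the docker ps -a --filter=name=... calls above.
    func listComponentContainers() (map[string][]string, error) {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        found := make(map[string][]string)
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                return nil, err
            }
            found[c] = strings.Fields(string(out)) // empty slice: no match
        }
        return found, nil
    }
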
	I0419 12:38:05.919322    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:38:05.919328    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:38:05.924472    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:38:05.924482    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:38:05.939298    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:38:05.939312    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:38:05.956226    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:38:05.956239    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:38:05.982367    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:38:05.982374    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:38:05.998917    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:38:05.998927    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:38:06.035097    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:38:06.035106    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:38:06.046838    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:38:06.046850    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:38:06.063574    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:38:06.063585    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:38:06.084540    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:38:06.084552    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:38:06.098047    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:38:06.098057    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:38:06.111919    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:38:06.111932    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:38:06.123181    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:38:06.123195    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:38:06.201945    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:38:06.201966    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:38:06.217817    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:38:06.217827    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:38:06.232562    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:38:06.232574    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:38:06.243733    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:38:06.243742    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
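
Each diagnostics pass then pulls the same fixed set of sources: the last 400 lines of every component container, the kubelet and docker/cri-docker journals, filtered dmesg, kubectl describe nodes, and a container status listing that falls back from crictl to docker ps. A condensed sketch of one pass, shelling out through bash so the pipelines and the `which crictl || echo crictl` fallback run verbatim (output handling reduced to a print):

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs runs one diagnostics pass over the sources collected in
    // the loop above, in no particular order, exactly as the log does.
    func gatherLogs(containers map[string][]string) {
        run := func(script string) {
            out, _ := exec.Command("/bin/bash", "-c", script).CombinedOutput()
            fmt.Printf("> %s\n%s\n", script, out)
        }
        run("sudo journalctl -u kubelet -n 400")
        run("sudo journalctl -u docker -u cri-docker -n 400")
        run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
        run("sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
            " --kubeconfig=/var/lib/minikube/kubeconfig")
        for name, ids := range containers {
            for _, id := range ids {
                fmt.Printf("== %s %s ==\n", name, id)
                run("docker logs --tail 400 " + id)
            }
        }
    }

From here the log simply alternates between this gathering pass and the 5-second healthz probe until the surrounding test gives up.
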
	I0419 12:38:08.756165    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:38:13.758874    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:38:13.759279    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:38:13.798742    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:38:13.798873    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:38:13.819437    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:38:13.819531    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:38:13.835976    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:38:13.836047    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:38:13.852519    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:38:13.852590    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:38:13.863553    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:38:13.863619    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:38:13.874124    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:38:13.874179    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:38:13.884326    9133 logs.go:276] 0 containers: []
	W0419 12:38:13.884342    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:38:13.884397    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:38:13.898251    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:38:13.898269    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:38:13.898274    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:38:13.909578    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:38:13.909590    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:38:13.921035    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:38:13.921045    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:38:13.932519    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:38:13.932531    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:38:13.945121    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:38:13.945133    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:38:13.980632    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:38:13.980639    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:38:13.984548    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:38:13.984556    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:38:13.999939    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:38:13.999949    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:38:14.013770    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:38:14.013779    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:38:14.029340    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:38:14.029351    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:38:14.040585    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:38:14.040596    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:38:14.065044    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:38:14.065051    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:38:14.079752    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:38:14.079766    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:38:14.116479    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:38:14.116492    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:38:14.130806    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:38:14.130815    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:38:14.152069    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:38:14.152079    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:38:14.166346    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:38:14.166357    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:38:16.696851    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:38:21.699617    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:38:21.700076    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:38:21.738652    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:38:21.738778    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:38:21.761017    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:38:21.761135    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:38:21.780939    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:38:21.781009    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:38:21.793052    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:38:21.793121    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:38:21.803817    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:38:21.803878    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:38:21.814527    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:38:21.814592    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:38:21.828616    9133 logs.go:276] 0 containers: []
	W0419 12:38:21.828627    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:38:21.828678    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:38:21.845769    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:38:21.845785    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:38:21.845791    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:38:21.857307    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:38:21.857317    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:38:21.892788    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:38:21.892801    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:38:21.907294    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:38:21.907304    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:38:21.922823    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:38:21.922837    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:38:21.934676    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:38:21.934690    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:38:21.948519    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:38:21.948531    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:38:21.965488    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:38:21.965501    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:38:21.978257    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:38:21.978268    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:38:21.990707    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:38:21.990719    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:38:21.996265    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:38:21.996274    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:38:22.009950    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:38:22.009965    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:38:22.030567    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:38:22.030579    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:38:22.042618    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:38:22.042629    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:38:22.078264    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:38:22.078271    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:38:22.089867    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:38:22.089876    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:38:22.101526    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:38:22.101538    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:38:24.629375    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:38:29.632087    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:38:29.632442    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:38:29.669734    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:38:29.669853    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:38:29.694156    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:38:29.694241    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:38:29.707643    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:38:29.707715    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:38:29.720222    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:38:29.720291    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:38:29.730813    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:38:29.730878    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:38:29.741453    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:38:29.741525    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:38:29.751921    9133 logs.go:276] 0 containers: []
	W0419 12:38:29.751936    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:38:29.751991    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:38:29.762164    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:38:29.762183    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:38:29.762189    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:38:29.780578    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:38:29.780592    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:38:29.794220    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:38:29.794235    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:38:29.805471    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:38:29.805482    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:38:29.816683    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:38:29.816693    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:38:29.828286    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:38:29.828299    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:38:29.840004    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:38:29.840015    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:38:29.844177    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:38:29.844185    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:38:29.881364    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:38:29.881379    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:38:29.896877    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:38:29.896887    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:38:29.914268    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:38:29.914282    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:38:29.949792    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:38:29.949801    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:38:29.964670    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:38:29.964681    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:38:29.977169    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:38:29.977193    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:38:29.988547    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:38:29.988557    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:38:30.002697    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:38:30.002709    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:38:30.023336    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:38:30.023346    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:38:32.550664    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:38:37.553336    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:38:37.553747    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:38:37.596395    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:38:37.596538    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:38:37.618365    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:38:37.618474    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:38:37.633840    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:38:37.633915    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:38:37.646718    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:38:37.646788    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:38:37.657439    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:38:37.657497    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:38:37.668289    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:38:37.668344    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:38:37.678509    9133 logs.go:276] 0 containers: []
	W0419 12:38:37.678522    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:38:37.678574    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:38:37.688667    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:38:37.688685    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:38:37.688690    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:38:37.703205    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:38:37.703215    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:38:37.720159    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:38:37.720168    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:38:37.757299    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:38:37.757309    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:38:37.769015    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:38:37.769026    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:38:37.788596    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:38:37.788608    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:38:37.803864    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:38:37.803874    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:38:37.819414    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:38:37.819426    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:38:37.831127    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:38:37.831137    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:38:37.843160    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:38:37.843172    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:38:37.859421    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:38:37.859430    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:38:37.863764    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:38:37.863769    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:38:37.877947    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:38:37.877959    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:38:37.891465    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:38:37.891482    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:38:37.906778    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:38:37.906793    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:38:37.919461    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:38:37.919473    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:38:37.943377    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:38:37.943384    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:38:40.479636    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:38:45.480615    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:38:45.480951    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:38:45.510676    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:38:45.510802    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:38:45.530088    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:38:45.530172    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:38:45.544054    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:38:45.544128    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:38:45.560641    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:38:45.560715    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:38:45.570844    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:38:45.570944    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:38:45.588787    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:38:45.588863    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:38:45.599790    9133 logs.go:276] 0 containers: []
	W0419 12:38:45.599800    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:38:45.599849    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:38:45.611032    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:38:45.611049    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:38:45.611054    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:38:45.624909    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:38:45.624919    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:38:45.638775    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:38:45.638786    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:38:45.650742    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:38:45.650754    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:38:45.668506    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:38:45.668518    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:38:45.680798    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:38:45.680808    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:38:45.715470    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:38:45.715479    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:38:45.736029    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:38:45.736043    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:38:45.755665    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:38:45.755681    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:38:45.767026    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:38:45.767037    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:38:45.780465    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:38:45.780477    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:38:45.793397    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:38:45.793411    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:38:45.828566    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:38:45.828572    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:38:45.832781    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:38:45.832787    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:38:45.844103    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:38:45.844113    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:38:45.860985    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:38:45.860995    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:38:45.873221    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:38:45.873233    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:38:48.401280    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:38:53.403739    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:38:53.403955    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:38:53.420612    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:38:53.420683    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:38:53.440603    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:38:53.440663    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:38:53.451377    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:38:53.451441    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:38:53.462204    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:38:53.462261    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:38:53.472785    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:38:53.472851    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:38:53.483001    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:38:53.483058    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:38:53.493525    9133 logs.go:276] 0 containers: []
	W0419 12:38:53.493535    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:38:53.493583    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:38:53.505285    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:38:53.505303    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:38:53.505310    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:38:53.520795    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:38:53.520805    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:38:53.534022    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:38:53.534034    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:38:53.546263    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:38:53.546275    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:38:53.581982    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:38:53.581990    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:38:53.605728    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:38:53.605738    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:38:53.616906    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:38:53.616917    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:38:53.627970    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:38:53.627986    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:38:53.661775    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:38:53.661788    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:38:53.675970    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:38:53.675982    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:38:53.692234    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:38:53.692246    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:38:53.707340    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:38:53.707354    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:38:53.724285    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:38:53.724296    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:38:53.737154    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:38:53.737165    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:38:53.741955    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:38:53.741965    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:38:53.753472    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:38:53.753484    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:38:53.764446    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:38:53.764456    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:38:56.289229    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:39:01.291974    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:39:01.292332    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:39:01.323711    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:39:01.323850    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:39:01.342519    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:39:01.342623    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:39:01.356649    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:39:01.356729    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:39:01.368624    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:39:01.368694    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:39:01.379491    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:39:01.379570    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:39:01.389809    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:39:01.389893    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:39:01.404079    9133 logs.go:276] 0 containers: []
	W0419 12:39:01.404093    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:39:01.404167    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:39:01.414083    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:39:01.414101    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:39:01.414107    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:39:01.428720    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:39:01.428730    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:39:01.439639    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:39:01.439650    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:39:01.450987    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:39:01.450998    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:39:01.463313    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:39:01.463325    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:39:01.499112    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:39:01.499123    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:39:01.516610    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:39:01.516622    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:39:01.528688    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:39:01.528698    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:39:01.543975    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:39:01.543988    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:39:01.561325    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:39:01.561335    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:39:01.574145    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:39:01.574159    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:39:01.585802    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:39:01.585815    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:39:01.590656    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:39:01.590665    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:39:01.624028    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:39:01.624041    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:39:01.647811    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:39:01.647825    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:39:01.662406    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:39:01.662419    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:39:01.673763    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:39:01.673776    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:39:04.199834    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:39:09.202588    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:39:09.202962    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:39:09.232395    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:39:09.232505    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:39:09.251644    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:39:09.251730    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:39:09.272436    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:39:09.272512    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:39:09.286521    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:39:09.286584    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:39:09.296909    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:39:09.296974    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:39:09.307349    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:39:09.307410    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:39:09.317707    9133 logs.go:276] 0 containers: []
	W0419 12:39:09.317719    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:39:09.317775    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:39:09.327986    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:39:09.328005    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:39:09.328010    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:39:09.343517    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:39:09.343528    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:39:09.378020    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:39:09.378033    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:39:09.390211    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:39:09.390222    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:39:09.407466    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:39:09.407477    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:39:09.430787    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:39:09.430796    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:39:09.443008    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:39:09.443021    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:39:09.456671    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:39:09.456682    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:39:09.460795    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:39:09.460803    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:39:09.474649    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:39:09.474662    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:39:09.488419    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:39:09.488431    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:39:09.502617    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:39:09.502626    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:39:09.517846    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:39:09.517854    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:39:09.529544    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:39:09.529555    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:39:09.565876    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:39:09.565889    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:39:09.585917    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:39:09.585929    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:39:09.597643    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:39:09.597653    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:39:12.111253    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:39:17.113970    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:39:17.114303    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:39:17.153754    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:39:17.153865    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:39:17.182548    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:39:17.182621    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:39:17.200859    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:39:17.200914    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:39:17.212256    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:39:17.212325    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:39:17.222635    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:39:17.222698    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:39:17.233061    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:39:17.233126    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:39:17.242673    9133 logs.go:276] 0 containers: []
	W0419 12:39:17.242684    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:39:17.242735    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:39:17.253421    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:39:17.253441    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:39:17.253446    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:39:17.272540    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:39:17.272555    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:39:17.290840    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:39:17.290855    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:39:17.302523    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:39:17.302533    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:39:17.337878    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:39:17.337888    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:39:17.352844    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:39:17.352855    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:39:17.366329    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:39:17.366341    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:39:17.379066    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:39:17.379081    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:39:17.394990    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:39:17.394999    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:39:17.406462    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:39:17.406472    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:39:17.440663    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:39:17.440670    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:39:17.454641    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:39:17.454651    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:39:17.465555    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:39:17.465565    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:39:17.476802    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:39:17.476814    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:39:17.488867    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:39:17.488879    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:39:17.492994    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:39:17.493001    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:39:17.517110    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:39:17.517115    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:39:20.041526    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:39:25.041884    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:39:25.042046    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:39:25.058234    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:39:25.058301    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:39:25.070681    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:39:25.070749    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:39:25.082424    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:39:25.082494    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:39:25.094406    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:39:25.094471    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:39:25.106725    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:39:25.106797    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:39:25.119476    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:39:25.119543    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:39:25.130524    9133 logs.go:276] 0 containers: []
	W0419 12:39:25.130557    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:39:25.130623    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:39:25.142018    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:39:25.142037    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:39:25.142043    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:39:25.146650    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:39:25.146656    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:39:25.161448    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:39:25.161458    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:39:25.181090    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:39:25.181100    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:39:25.205396    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:39:25.205404    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:39:25.217548    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:39:25.217563    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:39:25.237929    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:39:25.237941    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:39:25.249618    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:39:25.249631    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:39:25.265762    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:39:25.265773    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:39:25.281623    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:39:25.281634    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:39:25.294634    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:39:25.294644    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:39:25.306633    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:39:25.306645    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:39:25.318048    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:39:25.318058    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:39:25.355348    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:39:25.355359    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:39:25.392590    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:39:25.392601    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:39:25.406986    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:39:25.406996    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:39:25.419110    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:39:25.419120    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:39:27.942930    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:39:32.945518    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:39:32.945706    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:39:32.957934    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:39:32.958008    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:39:32.968746    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:39:32.968808    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:39:32.979415    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:39:32.979482    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:39:32.990421    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:39:32.990478    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:39:33.002179    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:39:33.002237    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:39:33.012700    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:39:33.012756    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:39:33.022588    9133 logs.go:276] 0 containers: []
	W0419 12:39:33.022599    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:39:33.022651    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:39:33.033210    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:39:33.033230    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:39:33.033235    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:39:33.050065    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:39:33.050076    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:39:33.075486    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:39:33.075495    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:39:33.111800    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:39:33.111807    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:39:33.130935    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:39:33.130944    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:39:33.149186    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:39:33.149196    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:39:33.160362    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:39:33.160375    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:39:33.182557    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:39:33.182568    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:39:33.201811    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:39:33.201824    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:39:33.213949    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:39:33.213959    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:39:33.226327    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:39:33.226339    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:39:33.237863    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:39:33.237875    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:39:33.249976    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:39:33.249986    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:39:33.261048    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:39:33.261060    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:39:33.265404    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:39:33.265411    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:39:33.300663    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:39:33.300673    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:39:33.314868    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:39:33.314878    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:39:35.832160    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:39:40.834254    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:39:40.834437    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:39:40.846136    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:39:40.846197    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:39:40.856980    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:39:40.857037    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:39:40.867125    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:39:40.867191    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:39:40.878029    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:39:40.878094    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:39:40.892613    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:39:40.892675    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:39:40.908854    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:39:40.908918    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:39:40.918920    9133 logs.go:276] 0 containers: []
	W0419 12:39:40.918931    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:39:40.918981    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:39:40.929749    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:39:40.929766    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:39:40.929772    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:39:40.945658    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:39:40.945669    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:39:40.950059    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:39:40.950064    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:39:40.985289    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:39:40.985302    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:39:40.999367    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:39:40.999380    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:39:41.011021    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:39:41.011031    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:39:41.031419    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:39:41.031427    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:39:41.042424    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:39:41.042435    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:39:41.054283    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:39:41.054295    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:39:41.079452    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:39:41.079459    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:39:41.091713    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:39:41.091726    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:39:41.109466    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:39:41.109479    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:39:41.123079    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:39:41.123091    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:39:41.137101    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:39:41.137113    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:39:41.152059    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:39:41.152073    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:39:41.167614    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:39:41.167627    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:39:41.203100    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:39:41.203110    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:39:43.716196    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:39:48.718158    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:39:48.718274    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:39:48.738302    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:39:48.738378    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:39:48.751046    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:39:48.751118    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:39:48.769811    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:39:48.769886    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:39:48.782238    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:39:48.782313    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:39:48.794213    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:39:48.794283    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:39:48.811176    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:39:48.811250    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:39:48.823495    9133 logs.go:276] 0 containers: []
	W0419 12:39:48.823509    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:39:48.823575    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:39:48.838327    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:39:48.838350    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:39:48.838357    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:39:48.852495    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:39:48.852507    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:39:48.857764    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:39:48.857777    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:39:48.879407    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:39:48.879424    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:39:48.898656    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:39:48.898670    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:39:48.920369    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:39:48.920387    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:39:48.958248    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:39:48.958265    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:39:48.971032    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:39:48.971045    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:39:48.984502    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:39:48.984518    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:39:49.012990    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:39:49.013012    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:39:49.026229    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:39:49.026243    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:39:49.069529    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:39:49.069541    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:39:49.084961    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:39:49.084978    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:39:49.097310    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:39:49.097325    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:39:49.109342    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:39:49.109354    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:39:49.124155    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:39:49.124168    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:39:49.139255    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:39:49.139265    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:39:51.656706    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:39:56.657466    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:39:56.657853    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:39:56.694779    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:39:56.694909    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:39:56.713746    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:39:56.713860    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:39:56.727681    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:39:56.727763    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:39:56.739662    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:39:56.739736    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:39:56.750408    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:39:56.750481    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:39:56.765023    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:39:56.765084    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:39:56.775590    9133 logs.go:276] 0 containers: []
	W0419 12:39:56.775604    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:39:56.775661    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:39:56.804031    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:39:56.804049    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:39:56.804055    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:39:56.808332    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:39:56.808339    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:39:56.822129    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:39:56.822138    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:39:56.836266    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:39:56.836276    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:39:56.850900    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:39:56.850910    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:39:56.862461    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:39:56.862473    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:39:56.878198    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:39:56.878208    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:39:56.889598    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:39:56.889607    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:39:56.923900    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:39:56.923913    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:39:56.935153    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:39:56.935164    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:39:56.946802    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:39:56.946814    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:39:56.964254    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:39:56.964264    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:39:56.977773    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:39:56.977784    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:39:57.001742    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:39:57.001750    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:39:57.036128    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:39:57.036137    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:39:57.055815    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:39:57.055825    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:39:57.068058    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:39:57.068071    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:39:59.582205    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:04.584288    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:04.584391    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:40:04.595173    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:40:04.595244    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:40:04.606200    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:40:04.606266    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:40:04.616871    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:40:04.616941    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:40:04.627111    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:40:04.627181    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:40:04.638222    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:40:04.638295    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:40:04.650464    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:40:04.650531    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:40:04.660437    9133 logs.go:276] 0 containers: []
	W0419 12:40:04.660447    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:40:04.660501    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:40:04.671031    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:40:04.671049    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:40:04.671055    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:40:04.684850    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:40:04.684863    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:40:04.697238    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:40:04.697248    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:40:04.708666    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:40:04.708676    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:40:04.743889    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:40:04.743899    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:40:04.758083    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:40:04.758095    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:40:04.770014    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:40:04.770026    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:40:04.788533    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:40:04.788545    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:40:04.800544    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:40:04.800555    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:40:04.823537    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:40:04.823545    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:40:04.827888    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:40:04.827894    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:40:04.846697    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:40:04.846710    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:40:04.866686    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:40:04.866699    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:40:04.885285    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:40:04.885297    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:40:04.897722    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:40:04.897734    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:40:04.932663    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:40:04.932674    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:40:04.947735    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:40:04.947749    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:40:07.466111    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:12.468785    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:12.469213    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:40:12.506618    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:40:12.506749    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:40:12.528106    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:40:12.528217    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:40:12.542931    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:40:12.543004    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:40:12.555268    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:40:12.555338    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:40:12.565943    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:40:12.566011    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:40:12.576815    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:40:12.576891    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:40:12.587402    9133 logs.go:276] 0 containers: []
	W0419 12:40:12.587412    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:40:12.587466    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:40:12.604131    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:40:12.604148    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:40:12.604154    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:40:12.618313    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:40:12.618323    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:40:12.633503    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:40:12.633515    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:40:12.656177    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:40:12.656185    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:40:12.667519    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:40:12.667529    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:40:12.671809    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:40:12.671817    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:40:12.704767    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:40:12.704777    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:40:12.726178    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:40:12.726192    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:40:12.741359    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:40:12.741368    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:40:12.752645    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:40:12.752654    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:40:12.788595    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:40:12.788610    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:40:12.803880    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:40:12.803889    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:40:12.816016    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:40:12.816030    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:40:12.829308    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:40:12.829321    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:40:12.840979    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:40:12.840993    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:40:12.852728    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:40:12.852738    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:40:12.864063    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:40:12.864077    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:40:15.384072    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:20.384813    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:20.385003    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:40:20.403868    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:40:20.403949    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:40:20.419262    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:40:20.419325    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:40:20.429221    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:40:20.429289    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:40:20.440182    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:40:20.440250    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:40:20.449754    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:40:20.449821    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:40:20.459874    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:40:20.459939    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:40:20.469447    9133 logs.go:276] 0 containers: []
	W0419 12:40:20.469461    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:40:20.469508    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:40:20.480236    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:40:20.480253    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:40:20.480257    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:40:20.503137    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:40:20.503147    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:40:20.514532    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:40:20.514547    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:40:20.534350    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:40:20.534363    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:40:20.548806    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:40:20.548818    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:40:20.560293    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:40:20.560304    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:40:20.594138    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:40:20.594151    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:40:20.607708    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:40:20.607718    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:40:20.620809    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:40:20.620821    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:40:20.634580    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:40:20.634592    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:40:20.645890    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:40:20.645903    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:40:20.666648    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:40:20.666658    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:40:20.677943    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:40:20.677958    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:40:20.713435    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:40:20.713442    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:40:20.724161    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:40:20.724173    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:40:20.739253    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:40:20.739264    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:40:20.750708    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:40:20.750717    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:40:23.257099    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:28.259568    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:28.259684    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:40:28.271523    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:40:28.271591    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:40:28.282534    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:40:28.282601    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:40:28.293492    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:40:28.293559    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:40:28.304412    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:40:28.304476    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:40:28.315864    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:40:28.315930    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:40:28.326593    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:40:28.326657    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:40:28.336731    9133 logs.go:276] 0 containers: []
	W0419 12:40:28.336741    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:40:28.336786    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:40:28.347468    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:40:28.347487    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:40:28.347493    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:40:28.361372    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:40:28.361382    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:40:28.382357    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:40:28.382369    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:40:28.394567    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:40:28.394578    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:40:28.410210    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:40:28.410221    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:40:28.448894    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:40:28.448917    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:40:28.491245    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:40:28.491257    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:40:28.504782    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:40:28.504798    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:40:28.519376    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:40:28.519392    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:40:28.533016    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:40:28.533034    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:40:28.549556    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:40:28.549577    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:40:28.573445    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:40:28.573467    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:40:28.586968    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:40:28.586983    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:40:28.591863    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:40:28.591874    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:40:28.607374    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:40:28.607387    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:40:28.620794    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:40:28.620810    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:40:28.634261    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:40:28.634273    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:40:31.160749    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:36.161689    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:36.162138    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:40:36.198269    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:40:36.198411    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:40:36.219375    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:40:36.219470    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:40:36.234920    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:40:36.234990    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:40:36.247017    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:40:36.247092    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:40:36.257459    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:40:36.257528    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:40:36.277926    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:40:36.278005    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:40:36.291153    9133 logs.go:276] 0 containers: []
	W0419 12:40:36.291165    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:40:36.291226    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:40:36.301838    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:40:36.301857    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:40:36.301862    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:40:36.313846    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:40:36.313859    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:40:36.337339    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:40:36.337348    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:40:36.351963    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:40:36.351975    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:40:36.363553    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:40:36.363566    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:40:36.380460    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:40:36.380471    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:40:36.393841    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:40:36.393851    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:40:36.405854    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:40:36.405864    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:40:36.417447    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:40:36.417457    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:40:36.431413    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:40:36.431422    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:40:36.450968    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:40:36.450977    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:40:36.465247    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:40:36.465257    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:40:36.481448    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:40:36.481461    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:40:36.493355    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:40:36.493368    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:40:36.527699    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:40:36.527708    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:40:36.531764    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:40:36.531775    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:40:36.566197    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:40:36.566211    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:40:39.080062    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:44.082333    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:44.082700    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:40:44.125240    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:40:44.125349    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:40:44.147655    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:40:44.147731    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:40:44.164379    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:40:44.164450    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:40:44.176974    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:40:44.177082    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:40:44.188720    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:40:44.188789    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:40:44.205896    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:40:44.205971    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:40:44.216964    9133 logs.go:276] 0 containers: []
	W0419 12:40:44.216979    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:40:44.217041    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:40:44.227986    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:40:44.228007    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:40:44.228013    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:40:44.232472    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:40:44.232481    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:40:44.248692    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:40:44.248705    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:40:44.261144    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:40:44.261158    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:40:44.281739    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:40:44.281750    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:40:44.299395    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:40:44.299407    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:40:44.311486    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:40:44.311499    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:40:44.327641    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:40:44.327651    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:40:44.346727    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:40:44.346741    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:40:44.358185    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:40:44.358203    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:40:44.381221    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:40:44.381233    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:40:44.397268    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:40:44.397280    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:40:44.409764    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:40:44.409774    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:40:44.422956    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:40:44.422971    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:40:44.459586    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:40:44.459597    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:40:44.495445    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:40:44.495457    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:40:44.513391    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:40:44.513403    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:40:47.027922    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:52.029096    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:52.029199    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:40:52.041628    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:40:52.041698    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:40:52.052032    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:40:52.052105    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:40:52.062065    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:40:52.062134    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:40:52.072613    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:40:52.072682    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:40:52.086886    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:40:52.086945    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:40:52.096997    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:40:52.097054    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:40:52.107972    9133 logs.go:276] 0 containers: []
	W0419 12:40:52.107983    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:40:52.108044    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:40:52.118428    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:40:52.118445    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:40:52.118450    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:40:52.129932    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:40:52.129944    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:40:52.141424    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:40:52.141438    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:40:52.159626    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:40:52.159642    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:40:52.173631    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:40:52.173641    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:40:52.185547    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:40:52.185558    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:40:52.200721    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:40:52.200731    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:40:52.217801    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:40:52.217816    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:40:52.222254    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:40:52.222261    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:40:52.242027    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:40:52.242037    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:40:52.253363    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:40:52.253373    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:40:52.265549    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:40:52.265562    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:40:52.278028    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:40:52.278042    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:40:52.289641    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:40:52.289651    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:40:52.324725    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:40:52.324733    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:40:52.338949    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:40:52.338963    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:40:52.361740    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:40:52.361755    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:40:54.900473    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:59.902746    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:59.902949    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:40:59.930379    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:40:59.930502    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:40:59.947632    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:40:59.947722    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:40:59.964874    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:40:59.964946    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:40:59.975565    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:40:59.975651    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:40:59.986138    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:40:59.986200    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:40:59.997896    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:40:59.997966    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:41:00.008548    9133 logs.go:276] 0 containers: []
	W0419 12:41:00.008559    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:41:00.008619    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:41:00.019008    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:41:00.019031    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:41:00.019037    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:41:00.023310    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:41:00.023318    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:41:00.035840    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:41:00.035854    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:41:00.047940    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:41:00.047951    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:41:00.058738    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:41:00.058748    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:41:00.083705    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:41:00.083714    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:41:00.119026    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:41:00.119034    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:41:00.154292    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:41:00.154305    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:41:00.169443    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:41:00.169454    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:41:00.183801    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:41:00.183814    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:41:00.203660    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:41:00.203670    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:41:00.214704    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:41:00.214715    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:41:00.226090    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:41:00.226104    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:41:00.240195    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:41:00.240206    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:41:00.255399    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:41:00.255410    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:41:00.273436    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:41:00.273446    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:41:00.290175    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:41:00.290184    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:41:02.803963    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:07.806085    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:07.806166    9133 kubeadm.go:591] duration metric: took 4m4.127296s to restartPrimaryControlPlane
	W0419 12:41:07.806241    9133 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0419 12:41:07.806273    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0419 12:41:08.797372    9133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 12:41:08.802584    9133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 12:41:08.805611    9133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 12:41:08.808754    9133 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 12:41:08.808760    9133 kubeadm.go:156] found existing configuration files:
	
	I0419 12:41:08.808787    9133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/admin.conf
	I0419 12:41:08.811477    9133 kubeadm.go:162] "https://control-plane.minikube.internal:51218" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 12:41:08.811497    9133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 12:41:08.814334    9133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/kubelet.conf
	I0419 12:41:08.817614    9133 kubeadm.go:162] "https://control-plane.minikube.internal:51218" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 12:41:08.817641    9133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 12:41:08.820531    9133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/controller-manager.conf
	I0419 12:41:08.822983    9133 kubeadm.go:162] "https://control-plane.minikube.internal:51218" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 12:41:08.823003    9133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 12:41:08.826027    9133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/scheduler.conf
	I0419 12:41:08.829287    9133 kubeadm.go:162] "https://control-plane.minikube.internal:51218" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 12:41:08.829306    9133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0419 12:41:08.832160    9133 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0419 12:41:08.848396    9133 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0419 12:41:08.848438    9133 kubeadm.go:309] [preflight] Running pre-flight checks
	I0419 12:41:08.894962    9133 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0419 12:41:08.895025    9133 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0419 12:41:08.895074    9133 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0419 12:41:08.943422    9133 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 12:41:08.948581    9133 out.go:204]   - Generating certificates and keys ...
	I0419 12:41:08.948617    9133 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0419 12:41:08.948655    9133 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0419 12:41:08.948701    9133 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0419 12:41:08.948730    9133 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0419 12:41:08.948758    9133 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0419 12:41:08.948785    9133 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0419 12:41:08.948829    9133 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0419 12:41:08.948857    9133 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0419 12:41:08.948899    9133 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0419 12:41:08.948934    9133 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0419 12:41:08.948953    9133 kubeadm.go:309] [certs] Using the existing "sa" key
	I0419 12:41:08.948986    9133 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 12:41:09.023906    9133 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 12:41:09.271693    9133 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 12:41:09.368974    9133 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 12:41:09.460621    9133 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 12:41:09.489843    9133 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 12:41:09.490315    9133 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 12:41:09.490419    9133 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0419 12:41:09.569870    9133 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 12:41:09.573318    9133 out.go:204]   - Booting up control plane ...
	I0419 12:41:09.573366    9133 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 12:41:09.573411    9133 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 12:41:09.573449    9133 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 12:41:09.573493    9133 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 12:41:09.573814    9133 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0419 12:41:14.076265    9133 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502160 seconds
	I0419 12:41:14.076341    9133 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0419 12:41:14.079814    9133 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0419 12:41:14.594912    9133 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0419 12:41:14.595176    9133 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-311000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0419 12:41:15.098516    9133 kubeadm.go:309] [bootstrap-token] Using token: sl9qiv.yyect9jtigof15l8
	I0419 12:41:15.104821    9133 out.go:204]   - Configuring RBAC rules ...
	I0419 12:41:15.104881    9133 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0419 12:41:15.104929    9133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0419 12:41:15.108496    9133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0419 12:41:15.109413    9133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0419 12:41:15.110846    9133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0419 12:41:15.112090    9133 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0419 12:41:15.115162    9133 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0419 12:41:15.281257    9133 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0419 12:41:15.501757    9133 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0419 12:41:15.502174    9133 kubeadm.go:309] 
	I0419 12:41:15.502211    9133 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0419 12:41:15.502215    9133 kubeadm.go:309] 
	I0419 12:41:15.502249    9133 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0419 12:41:15.502264    9133 kubeadm.go:309] 
	I0419 12:41:15.502278    9133 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0419 12:41:15.502315    9133 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0419 12:41:15.502340    9133 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0419 12:41:15.502344    9133 kubeadm.go:309] 
	I0419 12:41:15.502378    9133 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0419 12:41:15.502381    9133 kubeadm.go:309] 
	I0419 12:41:15.502404    9133 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0419 12:41:15.502407    9133 kubeadm.go:309] 
	I0419 12:41:15.502438    9133 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0419 12:41:15.502475    9133 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0419 12:41:15.502511    9133 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0419 12:41:15.502514    9133 kubeadm.go:309] 
	I0419 12:41:15.502564    9133 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0419 12:41:15.502603    9133 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0419 12:41:15.502606    9133 kubeadm.go:309] 
	I0419 12:41:15.502660    9133 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token sl9qiv.yyect9jtigof15l8 \
	I0419 12:41:15.502715    9133 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:43bc0efc3f284da6029f4e6dabe908f0c23cb1fa613a356d9709456ef7f07973 \
	I0419 12:41:15.502728    9133 kubeadm.go:309] 	--control-plane 
	I0419 12:41:15.502730    9133 kubeadm.go:309] 
	I0419 12:41:15.502778    9133 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0419 12:41:15.502783    9133 kubeadm.go:309] 
	I0419 12:41:15.502826    9133 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token sl9qiv.yyect9jtigof15l8 \
	I0419 12:41:15.502884    9133 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:43bc0efc3f284da6029f4e6dabe908f0c23cb1fa613a356d9709456ef7f07973 
	I0419 12:41:15.502947    9133 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0419 12:41:15.502954    9133 cni.go:84] Creating CNI manager for ""
	I0419 12:41:15.502963    9133 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:41:15.507463    9133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0419 12:41:15.515406    9133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0419 12:41:15.518587    9133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0419 12:41:15.523421    9133 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0419 12:41:15.523486    9133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 12:41:15.523545    9133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-311000 minikube.k8s.io/updated_at=2024_04_19T12_41_15_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b minikube.k8s.io/name=running-upgrade-311000 minikube.k8s.io/primary=true
	I0419 12:41:15.570814    9133 ops.go:34] apiserver oom_adj: -16
	I0419 12:41:15.570818    9133 kubeadm.go:1107] duration metric: took 47.366458ms to wait for elevateKubeSystemPrivileges
	W0419 12:41:15.570843    9133 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0419 12:41:15.570847    9133 kubeadm.go:393] duration metric: took 4m11.905434625s to StartCluster
	I0419 12:41:15.570857    9133 settings.go:142] acquiring lock: {Name:mkc28392d1c267200804e17c319a937f73acc262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:41:15.571023    9133 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:41:15.571402    9133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/kubeconfig: {Name:mkd215d166854846254d417d030271f915e1c7df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:41:15.571625    9133 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:41:15.576432    9133 out.go:177] * Verifying Kubernetes components...
	I0419 12:41:15.571670    9133 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0419 12:41:15.571809    9133 config.go:182] Loaded profile config "running-upgrade-311000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:41:15.584293    9133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:41:15.584304    9133 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-311000"
	I0419 12:41:15.584312    9133 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-311000"
	I0419 12:41:15.584315    9133 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-311000"
	W0419 12:41:15.584319    9133 addons.go:243] addon storage-provisioner should already be in state true
	I0419 12:41:15.584321    9133 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-311000"
	I0419 12:41:15.584337    9133 host.go:66] Checking if "running-upgrade-311000" exists ...
	I0419 12:41:15.585469    9133 kapi.go:59] client config for running-upgrade-311000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/client.key", CAFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063bf980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 12:41:15.586222    9133 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-311000"
	W0419 12:41:15.586227    9133 addons.go:243] addon default-storageclass should already be in state true
	I0419 12:41:15.586235    9133 host.go:66] Checking if "running-upgrade-311000" exists ...
	I0419 12:41:15.590395    9133 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:41:15.594501    9133 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 12:41:15.594507    9133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0419 12:41:15.594514    9133 sshutil.go:53] new ssh client: &{IP:localhost Port:51186 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/running-upgrade-311000/id_rsa Username:docker}
	I0419 12:41:15.595256    9133 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0419 12:41:15.595262    9133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0419 12:41:15.595266    9133 sshutil.go:53] new ssh client: &{IP:localhost Port:51186 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/running-upgrade-311000/id_rsa Username:docker}
	I0419 12:41:15.661405    9133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 12:41:15.666574    9133 api_server.go:52] waiting for apiserver process to appear ...
	I0419 12:41:15.666618    9133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 12:41:15.671248    9133 api_server.go:72] duration metric: took 99.614333ms to wait for apiserver process to appear ...
	I0419 12:41:15.671256    9133 api_server.go:88] waiting for apiserver healthz status ...
	I0419 12:41:15.671262    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:15.676787    9133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0419 12:41:15.677387    9133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 12:41:20.673324    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:20.673349    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:25.673814    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:25.673865    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:30.674300    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:30.674338    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:35.675146    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:35.675187    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:40.675988    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:40.676027    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:45.677043    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:45.677080    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0419 12:41:46.026163    9133 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0419 12:41:46.030167    9133 out.go:177] * Enabled addons: storage-provisioner
	I0419 12:41:46.038105    9133 addons.go:505] duration metric: took 30.4671385s for enable addons: enabled=[storage-provisioner]
	I0419 12:41:50.678418    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:50.678518    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:55.680479    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:55.680521    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:00.682729    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:00.682770    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:05.684932    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:05.684972    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:10.687128    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:10.687176    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:15.689424    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:15.689561    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:15.706099    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:42:15.706175    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:15.717438    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:42:15.717503    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:15.727819    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:42:15.727880    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:15.738139    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:42:15.738207    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:15.748038    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:42:15.748105    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:15.761763    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:42:15.761833    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:15.771522    9133 logs.go:276] 0 containers: []
	W0419 12:42:15.771533    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:15.771586    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:15.783369    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:42:15.783382    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:42:15.783387    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:42:15.795021    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:42:15.795032    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:42:15.813686    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:42:15.813696    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:42:15.826017    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:15.826027    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:15.830331    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:42:15.830337    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:42:15.844292    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:42:15.844305    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:42:15.855315    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:42:15.855325    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:42:15.866483    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:42:15.866495    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:42:15.891226    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:15.891236    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:15.915448    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:42:15.915456    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:15.928098    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:15.928111    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:15.962664    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:15.962675    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:15.998435    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:42:15.998449    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:42:18.514368    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:23.516654    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:23.516788    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:23.527462    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:42:23.527537    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:23.538127    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:42:23.538189    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:23.548950    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:42:23.549018    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:23.559086    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:42:23.559153    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:23.569533    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:42:23.569600    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:23.579787    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:42:23.579843    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:23.589887    9133 logs.go:276] 0 containers: []
	W0419 12:42:23.589898    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:23.589955    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:23.600199    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:42:23.600213    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:23.600218    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:23.604983    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:42:23.604993    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:42:23.619110    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:23.619121    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:23.643578    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:42:23.643585    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:42:23.654820    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:42:23.654830    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:42:23.667013    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:42:23.667023    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:42:23.686751    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:42:23.686765    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:42:23.698413    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:42:23.698422    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:42:23.715947    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:23.715958    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:23.751445    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:23.751456    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:23.797329    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:42:23.797340    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:42:23.811393    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:42:23.811405    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:42:23.823682    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:42:23.823693    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:26.337172    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:31.339625    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:31.340020    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:31.382521    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:42:31.382657    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:31.403810    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:42:31.403923    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:31.419553    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:42:31.419629    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:31.432294    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:42:31.432362    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:31.443376    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:42:31.443447    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:31.454721    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:42:31.454787    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:31.464849    9133 logs.go:276] 0 containers: []
	W0419 12:42:31.464861    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:31.464918    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:31.475296    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:42:31.475313    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:31.475318    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:31.480356    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:42:31.480365    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:42:31.494673    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:42:31.494684    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:42:31.506579    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:42:31.506588    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:42:31.517876    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:31.517888    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:31.542420    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:31.542429    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:31.576015    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:31.576025    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:31.615806    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:42:31.615820    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:42:31.629720    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:42:31.629732    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:42:31.643924    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:42:31.643936    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:42:31.659699    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:42:31.659711    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:42:31.677240    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:42:31.677250    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:42:31.689148    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:42:31.689159    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:34.202051    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:39.204205    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:39.204394    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:39.223553    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:42:39.223639    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:39.237246    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:42:39.237308    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:39.249561    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:42:39.249631    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:39.260119    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:42:39.260175    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:39.270395    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:42:39.270463    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:39.280852    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:42:39.280924    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:39.291126    9133 logs.go:276] 0 containers: []
	W0419 12:42:39.291136    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:39.291186    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:39.302005    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:42:39.302018    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:42:39.302026    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:42:39.313435    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:39.313445    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:39.338074    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:42:39.338081    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:39.349158    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:39.349170    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:39.384052    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:39.384061    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:39.420310    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:42:39.420323    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:42:39.440208    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:42:39.440220    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:42:39.452503    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:42:39.452513    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:42:39.470550    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:42:39.470563    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:42:39.482511    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:39.482522    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:39.487168    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:42:39.487175    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:42:39.501143    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:42:39.501155    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:42:39.516726    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:42:39.516735    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:42:42.031111    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:47.033255    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:47.033442    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:47.053760    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:42:47.053847    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:47.068199    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:42:47.068267    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:47.080216    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:42:47.080286    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:47.090418    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:42:47.090479    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:47.100510    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:42:47.100580    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:47.112203    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:42:47.112269    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:47.122432    9133 logs.go:276] 0 containers: []
	W0419 12:42:47.122449    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:47.122510    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:47.132582    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:42:47.132599    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:42:47.132603    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:42:47.150335    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:47.150347    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:47.173645    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:42:47.173653    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:42:47.185525    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:47.185536    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:47.189727    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:47.189737    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:47.224503    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:42:47.224513    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:42:47.238660    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:42:47.238670    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:42:47.256738    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:42:47.256748    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:42:47.268436    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:42:47.268446    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:42:47.284140    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:42:47.284150    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:42:47.295702    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:47.295712    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:47.329001    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:42:47.329010    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:47.339906    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:42:47.339916    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:42:49.853737    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:54.856333    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:54.856651    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:54.890424    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:42:54.890558    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:54.915676    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:42:54.915760    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:54.930159    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:42:54.930233    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:54.949954    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:42:54.950014    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:54.960745    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:42:54.960812    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:54.975408    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:42:54.975475    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:54.989970    9133 logs.go:276] 0 containers: []
	W0419 12:42:54.989981    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:54.990038    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:55.000609    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:42:55.000626    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:55.000632    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:55.035458    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:55.035469    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:55.040404    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:42:55.040411    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:42:55.054592    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:42:55.054606    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:42:55.069187    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:42:55.069198    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:42:55.080638    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:55.080647    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:55.104015    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:42:55.104026    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:55.115444    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:55.115460    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:55.149460    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:42:55.149471    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:42:55.163549    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:42:55.163559    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:42:55.175718    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:42:55.175728    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:42:55.191519    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:42:55.191530    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:42:55.208971    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:42:55.208981    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
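
Before each round of gathering, every control-plane component is looked up with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, and the matched IDs feed the "1 containers: [...]" lines printed at logs.go:276. Below is a short standalone Go sketch of that lookup; the helper name is hypothetical, and the test harness runs the same command over SSH via ssh_runner.go rather than locally.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the repeated lookup in the log:
// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "lookup failed:", err)
			continue
		}
		// Same shape as the logs.go:276 lines, e.g. "1 containers: [8d5750441143]".
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}
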
	I0419 12:42:57.722542    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:02.724405    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:02.724642    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:02.753652    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:02.753792    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:02.772429    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:02.772538    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:02.787866    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:43:02.787957    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:02.800125    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:02.800193    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:02.811243    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:02.811334    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:02.822122    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:02.822201    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:02.832188    9133 logs.go:276] 0 containers: []
	W0419 12:43:02.832199    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:02.832263    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:02.842862    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:02.842876    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:02.842884    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:02.878333    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:02.878344    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:02.892796    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:02.892807    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:02.904601    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:02.904611    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:02.915799    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:02.915809    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:02.928613    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:02.928625    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:02.945463    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:02.945473    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:02.969325    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:02.969335    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:03.002346    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:03.002356    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:03.006689    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:03.006697    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:43:03.020352    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:03.020362    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:03.032997    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:03.033007    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:03.047649    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:03.047659    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:05.561846    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:10.563983    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:10.564076    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:10.574651    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:10.574716    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:10.585586    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:10.585650    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:10.597647    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:43:10.597713    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:10.609959    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:10.610022    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:10.620216    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:10.620276    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:10.630275    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:10.630343    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:10.640233    9133 logs.go:276] 0 containers: []
	W0419 12:43:10.640248    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:10.640299    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:10.650650    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:10.650664    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:10.650668    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:10.667836    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:10.667846    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:10.672692    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:10.672701    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:10.707106    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:10.707116    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:10.721047    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:10.721059    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:43:10.734814    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:10.734825    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:10.745928    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:10.745941    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:10.760441    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:10.760452    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:10.793681    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:10.793691    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:10.805140    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:10.805150    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:10.816767    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:10.816777    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:10.828391    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:10.828401    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:10.852652    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:10.852661    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
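
The "container status" step that closes each cycle is itself a fallback chain: run crictl if it resolves on the guest's PATH, otherwise fall back to sudo docker ps -a. The Go sketch below captures the same intent locally; it is a hypothetical, simplified stand-in, not the shell chain the harness actually executes.

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when available and falls back to docker,
// approximating: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(string(out))
}
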
	I0419 12:43:13.366082    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:18.368368    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:18.368532    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:18.387039    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:18.387116    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:18.399912    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:18.399981    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:18.411352    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:43:18.411423    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:18.422073    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:18.422139    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:18.432506    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:18.432571    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:18.446045    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:18.446112    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:18.456360    9133 logs.go:276] 0 containers: []
	W0419 12:43:18.456371    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:18.456432    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:18.466824    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:18.466839    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:18.466845    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:18.478156    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:18.478167    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:18.497239    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:18.497249    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:18.514566    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:18.514580    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:18.526077    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:18.526088    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:18.561999    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:18.562010    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:18.576500    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:18.576511    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:43:18.589781    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:18.589792    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:18.601358    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:18.601371    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:18.613428    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:18.613438    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:18.629828    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:18.629838    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:18.654052    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:18.654060    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:18.687409    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:18.687415    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:21.194035    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:26.196371    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:26.196677    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:26.231910    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:26.232070    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:26.251731    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:26.251808    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:26.265890    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:43:26.265961    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:26.277989    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:26.278055    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:26.288681    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:26.288741    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:26.299435    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:26.299501    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:26.309404    9133 logs.go:276] 0 containers: []
	W0419 12:43:26.309415    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:26.309479    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:26.320027    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:26.320041    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:26.320046    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:26.331987    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:26.331999    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:26.336883    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:26.336889    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:26.350677    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:26.350691    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:26.362197    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:26.362206    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:26.373845    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:26.373855    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:26.388494    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:26.388504    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:26.405708    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:26.405721    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:26.417298    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:26.417307    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:26.441920    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:26.441936    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:26.476509    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:26.476516    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:26.512135    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:26.512145    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:43:26.529661    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:26.529672    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:29.044372    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:34.046659    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:34.046788    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:34.061241    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:34.061318    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:34.073173    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:34.073241    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:34.084482    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:43:34.084551    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:34.094781    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:34.094853    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:34.105026    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:34.105102    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:34.117060    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:34.117123    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:34.127342    9133 logs.go:276] 0 containers: []
	W0419 12:43:34.127353    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:34.127400    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:34.143040    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:34.143058    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:43:34.143066    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:43:34.154501    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:34.154520    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:34.171409    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:34.171424    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:34.183177    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:34.183186    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:34.194525    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:34.194535    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:34.227410    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:34.227418    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:34.262843    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:34.262857    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:43:34.278796    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:43:34.278807    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:43:34.290467    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:34.290479    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:34.305378    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:34.305388    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:34.317863    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:34.317872    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:34.329458    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:34.329468    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:34.333798    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:34.333804    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:34.347546    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:34.347556    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:34.358776    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:34.358789    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
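
Each "Gathering logs for <component>" pair resolves to docker logs --tail 400 <id> for a container found in the preceding enumeration, while host-level sources (kubelet, Docker, dmesg) go through journalctl and dmesg instead. Here is a minimal Go sketch of the per-container half, reusing two IDs from the cycle above; it is a hypothetical standalone version of what ssh_runner.go executes remotely.

package main

import (
	"fmt"
	"os/exec"
)

// gather fetches a container's recent output, as in the repeated
// `docker logs --tail 400 <id>` commands in the log.
func gather(name, id string) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	if err != nil {
		fmt.Printf("could not gather %s [%s]: %v\n", name, id, err)
		return
	}
	fmt.Printf("=== %s [%s] ===\n%s", name, id, out)
}

func main() {
	// IDs taken from the cycle above; they only exist inside that guest VM.
	gather("kube-apiserver", "8d5750441143")
	gather("etcd", "12602e2098e4")
}
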
	I0419 12:43:36.886039    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:41.888687    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:41.889108    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:41.923886    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:41.924018    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:41.943933    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:41.944026    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:41.960344    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:43:41.960446    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:41.975015    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:41.975079    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:41.986089    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:41.986162    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:41.997392    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:41.997463    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:42.007706    9133 logs.go:276] 0 containers: []
	W0419 12:43:42.007720    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:42.007777    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:42.022490    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:42.022511    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:42.022516    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:42.039091    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:42.039105    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:42.073454    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:42.073468    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:42.091976    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:42.091986    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:42.104528    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:42.104541    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:42.109061    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:42.109071    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:42.124079    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:42.124089    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:43:42.137987    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:43:42.137997    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:43:42.149072    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:42.149083    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:42.161844    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:42.161855    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:42.195603    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:42.195610    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:42.207590    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:42.207600    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:42.219686    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:42.219696    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:42.235320    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:42.235333    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:42.260186    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:43:42.260193    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:43:44.778442    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:49.781015    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:49.781519    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:49.820156    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:49.820282    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:49.839922    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:49.840009    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:49.855173    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:43:49.855244    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:49.868068    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:49.868132    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:49.878705    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:49.878769    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:49.889467    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:49.889525    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:49.900540    9133 logs.go:276] 0 containers: []
	W0419 12:43:49.900553    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:49.900606    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:49.911274    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:49.911288    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:49.911293    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:49.923897    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:49.923908    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:49.938656    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:49.938666    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:49.950140    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:49.950149    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:49.984392    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:49.984402    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:49.999315    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:49.999329    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:43:50.013618    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:43:50.013629    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:43:50.025249    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:50.025260    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:50.037520    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:50.037532    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:50.051533    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:50.051544    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:50.063235    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:50.063245    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:50.087404    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:50.087416    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:50.092268    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:50.092276    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:50.127355    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:43:50.127366    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:43:50.139157    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:50.139166    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:52.658313    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:57.658541    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:57.658661    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:57.670754    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:57.670830    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:57.681971    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:57.682043    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:57.693078    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:43:57.693143    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:57.703299    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:57.703358    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:57.714002    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:57.714058    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:57.724291    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:57.724350    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:57.734154    9133 logs.go:276] 0 containers: []
	W0419 12:43:57.734164    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:57.734210    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:57.744372    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:57.744389    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:43:57.744394    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:43:57.755566    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:57.755575    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:57.767769    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:57.767778    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:57.782155    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:57.782164    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:57.796489    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:43:57.796502    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:43:57.807679    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:57.807689    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:57.823358    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:57.823369    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:57.848200    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:57.848214    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:57.882420    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:57.882431    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:57.897581    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:57.897592    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:57.909434    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:57.909447    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:57.922152    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:57.922162    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:57.946077    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:57.946086    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:57.979704    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:57.979715    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:57.984402    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:57.984409    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:44:00.500237    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:05.502588    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:05.502750    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:05.514611    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:44:05.514681    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:05.527247    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:44:05.527315    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:05.545451    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:44:05.545519    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:05.555688    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:44:05.555754    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:05.569718    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:44:05.569781    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:05.589583    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:44:05.589645    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:05.599575    9133 logs.go:276] 0 containers: []
	W0419 12:44:05.599589    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:05.599639    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:05.609813    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:44:05.609830    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:05.609835    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:05.643714    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:44:05.643725    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:44:05.658505    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:44:05.658516    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:44:05.670530    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:44:05.670545    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:44:05.682239    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:05.682249    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:05.715451    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:44:05.715459    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:44:05.726512    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:44:05.726521    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:44:05.740119    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:44:05.740130    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:44:05.751815    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:44:05.751826    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:44:05.763066    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:44:05.763077    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:44:05.781160    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:05.781171    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:05.804654    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:44:05.804662    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:05.815989    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:05.815998    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:05.820420    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:44:05.820428    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:44:05.835241    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:44:05.835252    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:44:08.349376    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:13.351894    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:13.352251    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:13.385784    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:44:13.385908    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:13.403445    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:44:13.403526    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:13.417705    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:44:13.417783    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:13.429777    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:44:13.429845    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:13.441050    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:44:13.441119    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:13.452995    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:44:13.453058    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:13.463301    9133 logs.go:276] 0 containers: []
	W0419 12:44:13.463314    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:13.463369    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:13.474009    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:44:13.474024    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:13.474029    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:13.478632    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:13.478640    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:13.514973    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:44:13.514983    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:44:13.530667    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:13.530679    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:13.554907    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:44:13.554916    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:13.566723    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:13.566734    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:13.603640    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:44:13.603652    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:44:13.618254    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:44:13.618264    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:44:13.630144    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:44:13.630154    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:44:13.649107    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:44:13.649118    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:44:13.660759    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:44:13.660769    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:44:13.678089    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:44:13.678104    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:44:13.689983    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:44:13.689992    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:44:13.701818    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:44:13.701830    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:44:13.719946    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:44:13.719955    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:44:16.233968    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:21.236568    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:21.236828    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:21.263880    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:44:21.263998    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:21.282226    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:44:21.282309    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:21.297529    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:44:21.297602    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:21.308877    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:44:21.308942    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:21.319161    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:44:21.319223    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:21.332717    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:44:21.332781    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:21.342861    9133 logs.go:276] 0 containers: []
	W0419 12:44:21.342873    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:21.342924    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:21.353390    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:44:21.353407    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:21.353413    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:21.388592    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:21.388603    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:21.393137    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:44:21.393146    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:44:21.404935    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:44:21.404944    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:44:21.416414    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:21.416425    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:21.450737    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:44:21.450752    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:44:21.462305    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:44:21.462318    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:21.474934    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:21.474945    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:21.500231    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:44:21.500240    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:44:21.514454    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:44:21.514465    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:44:21.526300    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:44:21.526312    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:44:21.538255    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:44:21.538266    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:44:21.555997    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:44:21.556007    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:44:21.571007    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:44:21.571017    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:44:21.583329    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:44:21.583339    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:44:24.099720    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:29.102001    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:29.102167    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:29.119314    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:44:29.119395    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:29.132564    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:44:29.132632    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:29.143495    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:44:29.143560    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:29.160107    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:44:29.160171    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:29.170116    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:44:29.170179    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:29.184998    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:44:29.185058    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:29.194995    9133 logs.go:276] 0 containers: []
	W0419 12:44:29.195009    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:29.195062    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:29.205390    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:44:29.205406    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:29.205411    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:29.239686    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:44:29.239696    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:44:29.258108    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:44:29.258121    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:44:29.270153    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:29.270165    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:29.295517    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:44:29.295527    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:44:29.309346    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:44:29.309359    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:44:29.320805    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:44:29.320819    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:44:29.332983    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:44:29.332994    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:44:29.349985    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:44:29.349996    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:29.363691    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:29.363704    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:29.368041    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:29.368050    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:29.403273    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:44:29.403286    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:44:29.426301    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:44:29.426311    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:44:29.438558    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:44:29.438569    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:44:29.450120    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:44:29.450133    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:44:31.967291    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:36.969763    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:36.969894    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:36.984893    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:44:36.984992    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:36.997218    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:44:36.997288    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:37.012126    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:44:37.012194    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:37.023411    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:44:37.023480    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:37.034008    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:44:37.034073    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:37.044718    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:44:37.044809    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:37.054638    9133 logs.go:276] 0 containers: []
	W0419 12:44:37.054649    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:37.054702    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:37.065610    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:44:37.065627    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:44:37.065633    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:44:37.080932    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:44:37.080944    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:44:37.094912    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:44:37.094922    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:44:37.107289    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:44:37.107302    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:44:37.124642    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:44:37.124653    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:44:37.135980    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:37.135991    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:37.168862    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:37.168874    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:37.203735    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:44:37.203747    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:44:37.215336    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:44:37.215347    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:37.227368    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:37.227378    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:37.231945    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:44:37.231954    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:44:37.243632    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:37.243645    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:37.267731    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:44:37.267742    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:44:37.280598    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:44:37.280609    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:44:37.297981    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:44:37.297991    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:44:39.809871    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:44.812296    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:44.812647    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:44.842287    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:44:44.842410    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:44.860954    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:44:44.861031    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:44.876685    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:44:44.876749    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:44.888357    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:44:44.888427    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:44.899190    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:44:44.899267    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:44.910853    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:44:44.910923    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:44.921718    9133 logs.go:276] 0 containers: []
	W0419 12:44:44.921730    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:44.921787    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:44.933601    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:44:44.933621    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:44.933626    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:44.966971    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:44:44.966979    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:44:44.985188    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:44:44.985200    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:44:45.008288    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:44:45.008297    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:44:45.019902    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:45.019913    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:45.044278    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:44:45.044285    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:45.055901    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:45.055912    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:45.060264    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:45.060271    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:45.095784    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:44:45.095794    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:44:45.114423    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:44:45.114436    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:44:45.128355    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:44:45.128366    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:44:45.139963    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:44:45.139973    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:44:45.151341    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:44:45.151353    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:44:45.164300    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:44:45.164311    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:44:45.175980    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:44:45.175994    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:44:47.689577    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:52.690771    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:52.690865    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:52.702600    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:44:52.702674    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:52.714635    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:44:52.714712    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:52.729325    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:44:52.729400    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:52.742900    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:44:52.742990    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:52.755946    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:44:52.756019    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:52.767607    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:44:52.767673    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:52.781658    9133 logs.go:276] 0 containers: []
	W0419 12:44:52.781669    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:52.781725    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:52.793047    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:44:52.793065    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:52.793073    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:52.798438    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:44:52.798449    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:44:52.814344    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:44:52.814359    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:44:52.830461    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:44:52.830476    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:44:52.844916    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:44:52.844931    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:44:52.866122    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:44:52.866135    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:44:52.879332    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:44:52.879346    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:44:52.892295    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:44:52.892309    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:44:52.926609    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:44:52.926626    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:44:52.944443    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:52.944454    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:52.969938    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:52.969953    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:53.008096    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:53.008116    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:53.046221    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:44:53.046233    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:44:53.058544    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:44:53.058559    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:44:53.071423    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:44:53.071436    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:55.586530    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:00.588763    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:00.588929    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:45:00.600313    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:45:00.600377    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:45:00.610683    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:45:00.610753    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:45:00.621442    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:45:00.621501    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:45:00.631278    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:45:00.631337    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:45:00.641752    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:45:00.641808    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:45:00.652188    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:45:00.652245    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:45:00.662597    9133 logs.go:276] 0 containers: []
	W0419 12:45:00.662610    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:45:00.662666    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:45:00.673270    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:45:00.673289    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:45:00.673294    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:45:00.686065    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:45:00.686077    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:45:00.704107    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:45:00.704119    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:45:00.728392    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:45:00.728400    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:45:00.767108    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:45:00.767120    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:45:00.787305    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:45:00.787316    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:45:00.798926    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:45:00.798937    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:45:00.810515    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:45:00.810528    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:45:00.822535    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:45:00.822546    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:45:00.827075    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:45:00.827085    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:45:00.841494    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:45:00.841507    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:45:00.864859    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:45:00.864873    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:45:00.898560    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:45:00.898570    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:45:00.910443    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:45:00.910456    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:45:00.924985    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:45:00.924997    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:45:03.438588    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:08.440656    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:08.440807    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:45:08.463321    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:45:08.463395    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:45:08.474732    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:45:08.474803    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:45:08.486162    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:45:08.486233    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:45:08.504268    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:45:08.504336    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:45:08.514545    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:45:08.514610    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:45:08.525161    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:45:08.525237    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:45:08.535298    9133 logs.go:276] 0 containers: []
	W0419 12:45:08.535309    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:45:08.535360    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:45:08.545754    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:45:08.545775    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:45:08.545781    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:45:08.560762    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:45:08.560773    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:45:08.572428    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:45:08.572439    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:45:08.587166    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:45:08.587177    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:45:08.620595    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:45:08.620608    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:45:08.631882    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:45:08.631892    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:45:08.644210    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:45:08.644221    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:45:08.662118    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:45:08.662129    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:45:08.675940    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:45:08.675954    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:45:08.691253    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:45:08.691265    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:45:08.703674    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:45:08.703686    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:45:08.726078    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:45:08.726085    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:45:08.730742    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:45:08.730748    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:45:08.767535    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:45:08.767554    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:45:08.783997    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:45:08.784007    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:45:11.297654    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:16.299866    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:16.302689    9133 out.go:177] 
	W0419 12:45:16.306695    9133 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0419 12:45:16.306706    9133 out.go:239] * 
	W0419 12:45:16.307332    9133 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:45:16.320690    9133 out.go:177] 

** /stderr **
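
The stderr above shows the failure's shape: minikube repeatedly probes the guest apiserver at https://10.0.2.15:8443/healthz, each probe timing out after about 5 seconds, and gathers component logs between probes until the 6m0s node-wait deadline lapses. Below is a minimal Go sketch of such a polling loop; the URL, per-probe timeout, and overall deadline are taken from the log lines, but the code itself is an illustration, not minikube's actual api_server.go implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	func waitForHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // each probe in the log above times out after ~5s
			Transport: &http.Transport{
				// the guest apiserver presents a self-signed cert, so skip verification
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported healthy
				}
			}
			time.Sleep(2 * time.Second) // back off, then probe again
		}
		return fmt.Errorf("apiserver healthz never reported healthy within %s", deadline)
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println(err) // this is the condition the test run above hit
		}
	}
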
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-311000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
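Exit status 80 is what the harness received from the start command; given the GUEST_START reason above, it appears to be minikube's guest-error exit code. A hedged local repro sketch in Go follows, assuming a built out/minikube-darwin-arm64 binary and reusing the exact flags from the failing invocation; everything beyond those copied flags is illustrative.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same invocation as the failing harness line above.
		cmd := exec.Command("out/minikube-darwin-arm64", "start",
			"-p", "running-upgrade-311000", "--memory=2200",
			"--alsologtostderr", "-v=1", "--driver=qemu2")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			if ee, ok := err.(*exec.ExitError); ok {
				fmt.Fprintln(os.Stderr, "exit status:", ee.ExitCode()) // the harness saw 80 here
				os.Exit(ee.ExitCode())
			}
			os.Exit(1)
		}
	}
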
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-04-19 12:45:16.397471 -0700 PDT m=+1325.971989876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-311000 -n running-upgrade-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-311000 -n running-upgrade-311000: exit status 2 (15.677441167s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
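The --format={{.Host}} flag is a Go text/template rendered against the status result, which is why stdout carries just "Running" even though the exit status is non-zero. A minimal sketch of that template evaluation follows; the Status struct and its field names are assumptions for illustration, not minikube's actual status type.

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the structure a --format template renders against;
	// the field names here are illustrative, not minikube's exact type.
	type Status struct{ Host, Kubelet, APIServer string }

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		// Prints "Running", matching the stdout captured above even while
		// other components are down.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
	}
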
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-311000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p force-systemd-flag-767000          | force-systemd-flag-767000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:35 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-env-617000              | force-systemd-env-617000  | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:35 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-env-617000           | force-systemd-env-617000  | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:35 PDT | 19 Apr 24 12:35 PDT |
	| start   | -p docker-flags-060000                | docker-flags-060000       | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:35 PDT |                     |
	|         | --cache-images=false                  |                           |         |                |                     |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --install-addons=false                |                           |         |                |                     |                     |
	|         | --wait=false                          |                           |         |                |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |                |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |                |                     |                     |
	|         | --docker-opt=debug                    |                           |         |                |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-flag-767000             | force-systemd-flag-767000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:35 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-767000          | force-systemd-flag-767000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:35 PDT | 19 Apr 24 12:35 PDT |
	| start   | -p cert-expiration-455000             | cert-expiration-455000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:35 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | docker-flags-060000 ssh               | docker-flags-060000       | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:35 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=Environment                |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| ssh     | docker-flags-060000 ssh               | docker-flags-060000       | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:35 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=ExecStart                  |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| delete  | -p docker-flags-060000                | docker-flags-060000       | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:35 PDT | 19 Apr 24 12:35 PDT |
	| start   | -p cert-options-712000                | cert-options-712000       | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:35 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | cert-options-712000 ssh               | cert-options-712000       | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:36 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |                |                     |                     |
	| ssh     | -p cert-options-712000 -- sudo        | cert-options-712000       | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:36 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |                |                     |                     |
	| delete  | -p cert-options-712000                | cert-options-712000       | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:36 PDT | 19 Apr 24 12:36 PDT |
	| start   | -p running-upgrade-311000             | minikube                  | jenkins | v1.26.0        | 19 Apr 24 12:36 PDT | 19 Apr 24 12:36 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| start   | -p running-upgrade-311000             | running-upgrade-311000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:36 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| start   | -p cert-expiration-455000             | cert-expiration-455000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:38 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p cert-expiration-455000             | cert-expiration-455000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:39 PDT | 19 Apr 24 12:39 PDT |
	| start   | -p kubernetes-upgrade-777000          | kubernetes-upgrade-777000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:39 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-777000          | kubernetes-upgrade-777000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:39 PDT | 19 Apr 24 12:39 PDT |
	| start   | -p kubernetes-upgrade-777000          | kubernetes-upgrade-777000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:39 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-777000          | kubernetes-upgrade-777000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:39 PDT | 19 Apr 24 12:39 PDT |
	| start   | -p stopped-upgrade-860000             | minikube                  | jenkins | v1.26.0        | 19 Apr 24 12:39 PDT | 19 Apr 24 12:40 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-860000 stop           | minikube                  | jenkins | v1.26.0        | 19 Apr 24 12:40 PDT | 19 Apr 24 12:40 PDT |
	| start   | -p stopped-upgrade-860000             | stopped-upgrade-860000    | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:40 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 12:40:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 12:40:17.739640    9295 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:40:17.739783    9295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:40:17.739787    9295 out.go:304] Setting ErrFile to fd 2...
	I0419 12:40:17.739790    9295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:40:17.739936    9295 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:40:17.741014    9295 out.go:298] Setting JSON to false
	I0419 12:40:17.759699    9295 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5988,"bootTime":1713549629,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:40:17.759764    9295 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:40:17.764654    9295 out.go:177] * [stopped-upgrade-860000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:40:17.770669    9295 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:40:17.770745    9295 notify.go:220] Checking for updates...
	I0419 12:40:17.774601    9295 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:40:17.777606    9295 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:40:17.780672    9295 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:40:17.783612    9295 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:40:17.786648    9295 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:40:17.789984    9295 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:40:17.793544    9295 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0419 12:40:17.796626    9295 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:40:17.800609    9295 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:40:17.807649    9295 start.go:297] selected driver: qemu2
	I0419 12:40:17.807656    9295 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51447 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0419 12:40:17.807718    9295 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:40:17.810339    9295 cni.go:84] Creating CNI manager for ""
	I0419 12:40:17.810364    9295 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:40:17.810402    9295 start.go:340] cluster config:
	{Name:stopped-upgrade-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51447 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0419 12:40:17.810455    9295 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:40:17.815562    9295 out.go:177] * Starting "stopped-upgrade-860000" primary control-plane node in "stopped-upgrade-860000" cluster
	I0419 12:40:17.819606    9295 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0419 12:40:17.819622    9295 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0419 12:40:17.819629    9295 cache.go:56] Caching tarball of preloaded images
	I0419 12:40:17.819702    9295 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:40:17.819707    9295 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0419 12:40:17.819769    9295 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/config.json ...
	I0419 12:40:17.820227    9295 start.go:360] acquireMachinesLock for stopped-upgrade-860000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:40:17.820283    9295 start.go:364] duration metric: took 47.167µs to acquireMachinesLock for "stopped-upgrade-860000"
	I0419 12:40:17.820293    9295 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:40:17.820297    9295 fix.go:54] fixHost starting: 
	I0419 12:40:17.820419    9295 fix.go:112] recreateIfNeeded on stopped-upgrade-860000: state=Stopped err=<nil>
	W0419 12:40:17.820428    9295 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:40:17.828588    9295 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-860000" ...
	I0419 12:40:15.384072    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:17.832516    9295 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51412-:22,hostfwd=tcp::51413-:2376,hostname=stopped-upgrade-860000 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/disk.qcow2
	I0419 12:40:17.880681    9295 main.go:141] libmachine: STDOUT: 
	I0419 12:40:17.880703    9295 main.go:141] libmachine: STDERR: 
	I0419 12:40:17.880709    9295 main.go:141] libmachine: Waiting for VM to start (ssh -p 51412 docker@127.0.0.1)...
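
The qemu-system-aarch64 invocation above relies on QEMU user-mode networking: each hostfwd= clause maps a host-side localhost port to a guest port, which is how minikube reaches a VM that has no bridged interface. A minimal sketch of the same idea, with the ports copied from the invocation and everything else elided:

    # Guest 22 (SSH) -> host 51412, guest 2376 (Docker TLS) -> host 51413
    qemu-system-aarch64 ... -nic user,model=virtio,hostfwd=tcp::51412-:22,hostfwd=tcp::51413-:2376
    # Once the VM boots, the host connects the way the wait loop does:
    ssh -p 51412 docker@127.0.0.1
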
	I0419 12:40:20.384813    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:20.385003    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:40:20.403868    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:40:20.403949    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:40:20.419262    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:40:20.419325    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:40:20.429221    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:40:20.429289    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:40:20.440182    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:40:20.440250    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:40:20.449754    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:40:20.449821    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:40:20.459874    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:40:20.459939    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:40:20.469447    9133 logs.go:276] 0 containers: []
	W0419 12:40:20.469461    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:40:20.469508    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:40:20.480236    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:40:20.480253    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:40:20.480257    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:40:20.503137    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:40:20.503147    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:40:20.514532    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:40:20.514547    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:40:20.534350    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:40:20.534363    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:40:20.548806    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:40:20.548818    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:40:20.560293    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:40:20.560304    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:40:20.594138    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:40:20.594151    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:40:20.607708    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:40:20.607718    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:40:20.620809    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:40:20.620821    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:40:20.634580    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:40:20.634592    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:40:20.645890    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:40:20.645903    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:40:20.666648    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:40:20.666658    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:40:20.677943    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:40:20.677958    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:40:20.713435    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:40:20.713442    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:40:20.724161    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:40:20.724173    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:40:20.739253    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:40:20.739264    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:40:20.750708    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:40:20.750717    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
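
The block above is one round of minikube's diagnostics loop while it waits for the apiserver to answer /healthz: list each control-plane component's containers (running or exited) by name filter, then tail the last 400 lines of every match. A hedged sketch of that per-component step:

    # Enumerate a component's containers, then tail each one's logs.
    for id in $(docker ps -a --filter=name=k8s_etcd --format '{{.ID}}'); do
      docker logs --tail 400 "$id"
    done
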
	I0419 12:40:23.257099    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:28.259568    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:28.259684    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:40:28.271523    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:40:28.271591    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:40:28.282534    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:40:28.282601    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:40:28.293492    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:40:28.293559    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:40:28.304412    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:40:28.304476    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:40:28.315864    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:40:28.315930    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:40:28.326593    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:40:28.326657    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:40:28.336731    9133 logs.go:276] 0 containers: []
	W0419 12:40:28.336741    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:40:28.336786    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:40:28.347468    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:40:28.347487    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:40:28.347493    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:40:28.361372    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:40:28.361382    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:40:28.382357    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:40:28.382369    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:40:28.394567    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:40:28.394578    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:40:28.410210    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:40:28.410221    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:40:28.448894    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:40:28.448917    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:40:28.491245    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:40:28.491257    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:40:28.504782    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:40:28.504798    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:40:28.519376    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:40:28.519392    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:40:28.533016    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:40:28.533034    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:40:28.549556    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:40:28.549577    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:40:28.573445    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:40:28.573467    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:40:28.586968    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:40:28.586983    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:40:28.591863    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:40:28.591874    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:40:28.607374    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:40:28.607387    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:40:28.620794    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:40:28.620810    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:40:28.634261    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:40:28.634273    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:40:31.160749    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:36.161689    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:36.162138    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:40:36.198269    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:40:36.198411    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:40:36.219375    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:40:36.219470    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:40:36.234920    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:40:36.234990    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:40:36.247017    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:40:36.247092    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:40:36.257459    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:40:36.257528    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:40:36.277926    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:40:36.278005    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:40:36.291153    9133 logs.go:276] 0 containers: []
	W0419 12:40:36.291165    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:40:36.291226    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:40:36.301838    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:40:36.301857    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:40:36.301862    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:40:36.313846    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:40:36.313859    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:40:36.337339    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:40:36.337348    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:40:36.351963    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:40:36.351975    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:40:36.363553    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:40:36.363566    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:40:36.380460    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:40:36.380471    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:40:36.393841    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:40:36.393851    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:40:36.405854    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:40:36.405864    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:40:36.417447    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:40:36.417457    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:40:36.431413    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:40:36.431422    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:40:36.450968    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:40:36.450977    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:40:36.465247    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:40:36.465257    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:40:36.481448    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:40:36.481461    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:40:36.493355    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:40:36.493368    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:40:36.527699    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:40:36.527708    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:40:36.531764    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:40:36.531775    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:40:36.566197    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:40:36.566211    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:40:39.080062    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:38.066210    9295 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/config.json ...
	I0419 12:40:38.066927    9295 machine.go:94] provisionDockerMachine start ...
	I0419 12:40:38.067019    9295 main.go:141] libmachine: Using SSH client type: native
	I0419 12:40:38.067342    9295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033a5c80] 0x1033a84e0 <nil>  [] 0s} localhost 51412 <nil> <nil>}
	I0419 12:40:38.067354    9295 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 12:40:38.145023    9295 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 12:40:38.145055    9295 buildroot.go:166] provisioning hostname "stopped-upgrade-860000"
	I0419 12:40:38.145135    9295 main.go:141] libmachine: Using SSH client type: native
	I0419 12:40:38.145382    9295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033a5c80] 0x1033a84e0 <nil>  [] 0s} localhost 51412 <nil> <nil>}
	I0419 12:40:38.145395    9295 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-860000 && echo "stopped-upgrade-860000" | sudo tee /etc/hostname
	I0419 12:40:38.218738    9295 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-860000
	
	I0419 12:40:38.218814    9295 main.go:141] libmachine: Using SSH client type: native
	I0419 12:40:38.218987    9295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033a5c80] 0x1033a84e0 <nil>  [] 0s} localhost 51412 <nil> <nil>}
	I0419 12:40:38.219000    9295 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-860000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-860000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-860000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 12:40:38.282803    9295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
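
The heredoc the provisioner just ran is an idempotence guard: it only touches /etc/hosts when no line already ends in the new hostname, and it prefers rewriting an existing 127.0.1.1 entry over appending a duplicate. Re-running the command is therefore a no-op, which a quick check (illustrative, not from the log) makes visible:

    # After any number of runs there is exactly one mapping for the name.
    grep -c 'stopped-upgrade-860000$' /etc/hosts    # expect: 1
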
	I0419 12:40:38.282818    9295 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18669-6895/.minikube CaCertPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18669-6895/.minikube}
	I0419 12:40:38.282827    9295 buildroot.go:174] setting up certificates
	I0419 12:40:38.282838    9295 provision.go:84] configureAuth start
	I0419 12:40:38.282843    9295 provision.go:143] copyHostCerts
	I0419 12:40:38.282929    9295 exec_runner.go:144] found /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.pem, removing ...
	I0419 12:40:38.282937    9295 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.pem
	I0419 12:40:38.283046    9295 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.pem (1078 bytes)
	I0419 12:40:38.283252    9295 exec_runner.go:144] found /Users/jenkins/minikube-integration/18669-6895/.minikube/cert.pem, removing ...
	I0419 12:40:38.283257    9295 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18669-6895/.minikube/cert.pem
	I0419 12:40:38.283311    9295 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18669-6895/.minikube/cert.pem (1123 bytes)
	I0419 12:40:38.283428    9295 exec_runner.go:144] found /Users/jenkins/minikube-integration/18669-6895/.minikube/key.pem, removing ...
	I0419 12:40:38.283432    9295 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18669-6895/.minikube/key.pem
	I0419 12:40:38.283482    9295 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18669-6895/.minikube/key.pem (1679 bytes)
	I0419 12:40:38.283573    9295 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-860000 san=[127.0.0.1 localhost minikube stopped-upgrade-860000]
	I0419 12:40:38.352784    9295 provision.go:177] copyRemoteCerts
	I0419 12:40:38.352826    9295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 12:40:38.352834    9295 sshutil.go:53] new ssh client: &{IP:localhost Port:51412 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/id_rsa Username:docker}
	I0419 12:40:38.384349    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 12:40:38.391321    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0419 12:40:38.398508    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 12:40:38.405794    9295 provision.go:87] duration metric: took 122.949792ms to configureAuth
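
copyRemoteCerts pushed exactly the three files the Docker daemon will be told to use for TLS on tcp://0.0.0.0:2376. The flags that consume them appear in the ExecStart rendered further down; restated here for orientation, with the paths taken from the scp lines above:

    dockerd -H tcp://0.0.0.0:2376 --tlsverify \
      --tlscacert /etc/docker/ca.pem \
      --tlscert   /etc/docker/server.pem \
      --tlskey    /etc/docker/server-key.pem
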
	I0419 12:40:38.405803    9295 buildroot.go:189] setting minikube options for container-runtime
	I0419 12:40:38.405929    9295 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:40:38.405964    9295 main.go:141] libmachine: Using SSH client type: native
	I0419 12:40:38.406060    9295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033a5c80] 0x1033a84e0 <nil>  [] 0s} localhost 51412 <nil> <nil>}
	I0419 12:40:38.406065    9295 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 12:40:38.463375    9295 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 12:40:38.463382    9295 buildroot.go:70] root file system type: tmpfs
	I0419 12:40:38.463437    9295 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 12:40:38.463477    9295 main.go:141] libmachine: Using SSH client type: native
	I0419 12:40:38.463621    9295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033a5c80] 0x1033a84e0 <nil>  [] 0s} localhost 51412 <nil> <nil>}
	I0419 12:40:38.463657    9295 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 12:40:38.528240    9295 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 12:40:38.528293    9295 main.go:141] libmachine: Using SSH client type: native
	I0419 12:40:38.528410    9295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033a5c80] 0x1033a84e0 <nil>  [] 0s} localhost 51412 <nil> <nil>}
	I0419 12:40:38.528422    9295 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 12:40:38.866061    9295 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
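
The diff-or-replace one-liner above is a small compare-and-swap: the freshly rendered unit only displaces the live one, and only triggers daemon-reload/enable/restart, when the two differ. Here diff failed because no docker.service existed yet, so the new unit was installed and enabled. The same idiom, unrolled:

    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload &&
      sudo systemctl -f enable docker &&
      sudo systemctl -f restart docker
    fi
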
	
	I0419 12:40:38.866074    9295 machine.go:97] duration metric: took 799.147041ms to provisionDockerMachine
	I0419 12:40:38.866080    9295 start.go:293] postStartSetup for "stopped-upgrade-860000" (driver="qemu2")
	I0419 12:40:38.866086    9295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 12:40:38.866161    9295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 12:40:38.866171    9295 sshutil.go:53] new ssh client: &{IP:localhost Port:51412 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/id_rsa Username:docker}
	I0419 12:40:38.897832    9295 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 12:40:38.899424    9295 info.go:137] Remote host: Buildroot 2021.02.12
	I0419 12:40:38.899434    9295 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18669-6895/.minikube/addons for local assets ...
	I0419 12:40:38.899517    9295 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18669-6895/.minikube/files for local assets ...
	I0419 12:40:38.899634    9295 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18669-6895/.minikube/files/etc/ssl/certs/73042.pem -> 73042.pem in /etc/ssl/certs
	I0419 12:40:38.899764    9295 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 12:40:38.902506    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/files/etc/ssl/certs/73042.pem --> /etc/ssl/certs/73042.pem (1708 bytes)
	I0419 12:40:38.909837    9295 start.go:296] duration metric: took 43.753167ms for postStartSetup
	I0419 12:40:38.909851    9295 fix.go:56] duration metric: took 21.089818042s for fixHost
	I0419 12:40:38.909886    9295 main.go:141] libmachine: Using SSH client type: native
	I0419 12:40:38.909990    9295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033a5c80] 0x1033a84e0 <nil>  [] 0s} localhost 51412 <nil> <nil>}
	I0419 12:40:38.909995    9295 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0419 12:40:38.967841    9295 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713555639.037568046
	
	I0419 12:40:38.967849    9295 fix.go:216] guest clock: 1713555639.037568046
	I0419 12:40:38.967853    9295 fix.go:229] Guest: 2024-04-19 12:40:39.037568046 -0700 PDT Remote: 2024-04-19 12:40:38.909853 -0700 PDT m=+21.204413251 (delta=127.715046ms)
	I0419 12:40:38.967863    9295 fix.go:200] guest clock delta is within tolerance: 127.715046ms
	I0419 12:40:38.967865    9295 start.go:83] releasing machines lock for "stopped-upgrade-860000", held for 21.1478425s
	I0419 12:40:38.967916    9295 ssh_runner.go:195] Run: cat /version.json
	I0419 12:40:38.967922    9295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 12:40:38.967923    9295 sshutil.go:53] new ssh client: &{IP:localhost Port:51412 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/id_rsa Username:docker}
	I0419 12:40:38.967941    9295 sshutil.go:53] new ssh client: &{IP:localhost Port:51412 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/id_rsa Username:docker}
	W0419 12:40:38.968505    9295 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51412: connect: connection refused
	I0419 12:40:38.968528    9295 retry.go:31] will retry after 131.837075ms: dial tcp [::1]:51412: connect: connection refused
	W0419 12:40:39.136535    9295 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0419 12:40:39.136599    9295 ssh_runner.go:195] Run: systemctl --version
	I0419 12:40:39.138858    9295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 12:40:39.141895    9295 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 12:40:39.141929    9295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0419 12:40:39.145818    9295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0419 12:40:39.158679    9295 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
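
Those two find/sed passes normalize every bridge and podman CNI config they find: IPv6 "dst"/"subnet" entries (anything containing a colon) are deleted, the pod subnet is pinned to 10.244.0.0/16, and podman configs additionally get gateway 10.244.0.1. An illustrative check of the one file that matched:

    # Expect "subnet": "10.244.0.0/16" and "gateway": "10.244.0.1"
    sudo grep -E '"(subnet|gateway)"' /etc/cni/net.d/87-podman-bridge.conflist
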
	I0419 12:40:39.158693    9295 start.go:494] detecting cgroup driver to use...
	I0419 12:40:39.158788    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 12:40:39.166818    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0419 12:40:39.170151    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 12:40:39.173105    9295 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 12:40:39.173143    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 12:40:39.176262    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 12:40:39.179251    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 12:40:39.182301    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 12:40:39.187797    9295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 12:40:39.191364    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 12:40:39.195533    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 12:40:39.200678    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0419 12:40:39.203705    9295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 12:40:39.206778    9295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 12:40:39.210055    9295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:40:39.267025    9295 ssh_runner.go:195] Run: sudo systemctl restart containerd
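
The sed batch above rewrites /etc/containerd/config.toml in place so containerd matches the chosen cgroup driver: SystemdCgroup is forced to false (i.e. cgroupfs), the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are mapped to runc.v2, and the CRI plugin's conf_dir is pointed at /etc/cni/net.d. A quick way to verify the key setting after the restart (illustrative, not from the log):

    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
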
	I0419 12:40:39.277674    9295 start.go:494] detecting cgroup driver to use...
	I0419 12:40:39.277757    9295 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 12:40:39.282753    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 12:40:39.287905    9295 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 12:40:39.297608    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 12:40:39.302445    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 12:40:39.307325    9295 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 12:40:39.350222    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 12:40:39.355535    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 12:40:39.360873    9295 ssh_runner.go:195] Run: which cri-dockerd
	I0419 12:40:39.362088    9295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 12:40:39.364719    9295 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 12:40:39.369628    9295 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 12:40:39.433534    9295 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 12:40:39.502418    9295 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 12:40:39.502469    9295 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 12:40:39.507845    9295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:40:39.568854    9295 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 12:40:40.699756    9295 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.130899667s)
	I0419 12:40:40.699824    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 12:40:40.704742    9295 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0419 12:40:40.711268    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 12:40:40.716160    9295 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 12:40:40.768755    9295 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 12:40:40.828148    9295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:40:40.888822    9295 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 12:40:40.894848    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 12:40:40.899833    9295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:40:40.970195    9295 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 12:40:41.009689    9295 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 12:40:41.009772    9295 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
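
The "Will wait 60s for socket path" line is a stat-based poll: keep checking for /var/run/cri-dockerd.sock until it exists or the deadline passes. A hedged sketch of the equivalent shell loop (the actual wait happens in minikube's Go code):

    for _ in $(seq 1 60); do
      stat /var/run/cri-dockerd.sock >/dev/null 2>&1 && break
      sleep 1
    done
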
	I0419 12:40:41.011874    9295 start.go:562] Will wait 60s for crictl version
	I0419 12:40:41.011932    9295 ssh_runner.go:195] Run: which crictl
	I0419 12:40:41.013686    9295 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 12:40:41.028410    9295 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0419 12:40:41.028490    9295 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 12:40:41.045199    9295 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 12:40:41.068587    9295 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0419 12:40:41.068705    9295 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0419 12:40:41.070017    9295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
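
The host.minikube.internal update uses a rebuild-and-copy idiom rather than editing in place: filter the old entry out with grep -v, append the fresh mapping, write the result to a temp file, then install it with sudo cp (a plain sudo redirection into /etc/hosts would be evaluated by the unprivileged shell and fail). Generalized:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '10.0.2.2\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
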
	I0419 12:40:41.073799    9295 kubeadm.go:877] updating cluster {Name:stopped-upgrade-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51447 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0419 12:40:41.073849    9295 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0419 12:40:41.073886    9295 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 12:40:41.084515    9295 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0419 12:40:41.084529    9295 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0419 12:40:41.084571    9295 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0419 12:40:41.087401    9295 ssh_runner.go:195] Run: which lz4
	I0419 12:40:41.088766    9295 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0419 12:40:41.089848    9295 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0419 12:40:41.089857    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0419 12:40:41.838317    9295 docker.go:649] duration metric: took 749.598291ms to copy over tarball
	I0419 12:40:41.838379    9295 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0419 12:40:44.082333    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:44.082700    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:40:43.000758    9295 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.162380833s)
	I0419 12:40:43.000775    9295 ssh_runner.go:146] rm: /preloaded.tar.lz4
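
The preload sequence that just completed is three steps: copy the lz4 tarball of cached images into the guest, unpack it over /var (where Docker's overlay2 store lives), and delete the tarball. Condensed, with the transfer shown as a comment because it goes through minikube's own scp-over-ssh runner:

    # 1. ship preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
    #    into the guest as /preloaded.tar.lz4 (ssh_runner scp)
    # 2. unpack into /var, preserving security xattrs (file capabilities)
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    # 3. clean up
    rm /preloaded.tar.lz4
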
	I0419 12:40:43.016069    9295 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0419 12:40:43.018944    9295 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0419 12:40:43.023904    9295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:40:43.086289    9295 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 12:40:44.770321    9295 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.684043292s)
	I0419 12:40:44.770418    9295 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 12:40:44.786884    9295 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0419 12:40:44.786894    9295 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0419 12:40:44.786901    9295 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0419 12:40:44.794884    9295 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0419 12:40:44.794936    9295 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0419 12:40:44.795017    9295 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:40:44.795123    9295 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:40:44.795158    9295 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0419 12:40:44.795232    9295 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0419 12:40:44.795672    9295 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0419 12:40:44.795854    9295 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0419 12:40:44.803677    9295 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0419 12:40:44.803866    9295 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0419 12:40:44.803976    9295 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0419 12:40:44.804060    9295 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:40:44.804091    9295 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:40:44.804299    9295 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0419 12:40:44.804531    9295 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0419 12:40:44.804530    9295 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
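
Each "daemon lookup ... No such image" line above just means the image is not yet in the local Docker daemon, so the loader falls back to the on-disk cache. A rough equivalent of that probe using the docker CLI (minikube itself goes through a Go registry library, so this is an approximation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageInDaemon reports whether the local Docker daemon already has the
// image, using the same docker image inspect --format {{.Id}} probe as
// the log; a "No such image" failure surfaces as ok == false.
func imageInDaemon(ref string) (id string, ok bool) {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
	if err != nil {
		return "", false
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	for _, ref := range []string{"registry.k8s.io/pause:3.7", "registry.k8s.io/etcd:3.5.3-0"} {
		if id, ok := imageInDaemon(ref); ok {
			fmt.Println(ref, "present as", id)
		} else {
			fmt.Println(ref, "missing; would load from the cache")
		}
	}
}
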
	I0419 12:40:45.214298    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0419 12:40:45.225369    9295 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0419 12:40:45.225398    9295 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0419 12:40:45.225448    9295 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0419 12:40:45.235457    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0419 12:40:45.246341    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0419 12:40:45.256524    9295 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0419 12:40:45.256546    9295 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0419 12:40:45.256589    9295 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0419 12:40:45.257822    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0419 12:40:45.270489    9295 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0419 12:40:45.270508    9295 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0419 12:40:45.270556    9295 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0419 12:40:45.270588    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0419 12:40:45.280541    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0419 12:40:45.299338    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0419 12:40:45.300801    9295 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0419 12:40:45.300891    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:40:45.309307    9295 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0419 12:40:45.309327    9295 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0419 12:40:45.309378    9295 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0419 12:40:45.319181    9295 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0419 12:40:45.319204    9295 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:40:45.319252    9295 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:40:45.319269    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0419 12:40:45.328521    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0419 12:40:45.329624    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0419 12:40:45.330820    9295 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0419 12:40:45.341830    9295 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0419 12:40:45.341850    9295 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0419 12:40:45.341830    9295 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0419 12:40:45.341889    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0419 12:40:45.341908    9295 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0419 12:40:45.343723    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0419 12:40:45.370828    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0419 12:40:45.370877    9295 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0419 12:40:45.370900    9295 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0419 12:40:45.370952    9295 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0419 12:40:45.389797    9295 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0419 12:40:45.389812    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0419 12:40:45.401123    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0419 12:40:45.401264    9295 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0419 12:40:45.435611    9295 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0419 12:40:45.435649    9295 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0419 12:40:45.435670    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0419 12:40:45.442524    9295 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0419 12:40:45.442534    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0419 12:40:45.468003    9295 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0419 12:40:45.611242    9295 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0419 12:40:45.611337    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:40:45.621895    9295 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0419 12:40:45.621919    9295 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:40:45.621973    9295 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:40:45.636474    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0419 12:40:45.636617    9295 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0419 12:40:45.638080    9295 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0419 12:40:45.638097    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0419 12:40:45.662778    9295 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0419 12:40:45.662797    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0419 12:40:45.905524    9295 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0419 12:40:45.905569    9295 cache_images.go:92] duration metric: took 1.118681792s to LoadCachedImages
	W0419 12:40:45.905609    9295 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
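
The transfer-and-load step pipes each cached tarball into docker load, as the sudo cat ... | docker load commands above show. The same effect in Go, feeding the file to docker load's stdin directly (the path is one from the log; docker must be on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage streams a cached image tarball into the Docker daemon; it is
// equivalent to `sudo cat <file> | docker load`, minus the cat process.
func loadImage(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load %s: %v: %s", path, err, out)
	}
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
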
	I0419 12:40:45.905615    9295 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0419 12:40:45.905666    9295 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-860000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 12:40:45.905721    9295 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0419 12:40:45.919566    9295 cni.go:84] Creating CNI manager for ""
	I0419 12:40:45.919578    9295 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:40:45.919582    9295 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 12:40:45.919593    9295 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-860000 NodeName:stopped-upgrade-860000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 12:40:45.919658    9295 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-860000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 12:40:45.919711    9295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0419 12:40:45.922868    9295 binaries.go:44] Found k8s binaries, skipping transfer
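
The kubeadm config dump above is rendered from typed options (the kubeadm.go:181 line). A toy text/template rendering of just the ClusterConfiguration fragment; the struct and template here are illustrative stand-ins, not minikube's own types:

package main

import (
	"os"
	"text/template"
)

// clusterOpts is a hypothetical option set; only the output shape
// matches the ClusterConfiguration fragment in the log.
type clusterOpts struct {
	KubernetesVersion string
	ControlPlane      string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlane}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	err := t.Execute(os.Stdout, clusterOpts{
		KubernetesVersion: "v1.24.1",
		ControlPlane:      "control-plane.minikube.internal:8443",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}
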
	I0419 12:40:45.922894    9295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 12:40:45.925939    9295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0419 12:40:45.930976    9295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 12:40:45.936017    9295 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0419 12:40:45.940912    9295 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0419 12:40:45.942069    9295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
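
The /etc/hosts one-liner above is a small atomic-update idiom: grep -v strips any stale control-plane.minikube.internal line, echo appends the fresh mapping, and the result lands in a temp file that is copied over /etc/hosts in one step. A Go sketch of the same update (os.Rename replaces the sudo cp, and the path is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any stale line ending in "\t<host>" and appends
// a fresh "<ip>\t<host>" mapping, writing through a temp file so readers
// never see a half-written hosts file.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // same filter as grep -v $'\t<host>$'
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts", "10.0.2.15", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
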
	I0419 12:40:45.945932    9295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:40:46.010942    9295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 12:40:46.017399    9295 certs.go:68] Setting up /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000 for IP: 10.0.2.15
	I0419 12:40:46.017407    9295 certs.go:194] generating shared ca certs ...
	I0419 12:40:46.017416    9295 certs.go:226] acquiring lock for ca certs: {Name:mke38b98dd5558382d381a0a6e0e324ad9664707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:40:46.017581    9295 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.key
	I0419 12:40:46.017629    9295 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/proxy-client-ca.key
	I0419 12:40:46.017635    9295 certs.go:256] generating profile certs ...
	I0419 12:40:46.017710    9295 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/client.key
	I0419 12:40:46.017729    9295 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.key.8352d7f7
	I0419 12:40:46.017741    9295 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.crt.8352d7f7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0419 12:40:46.136552    9295 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.crt.8352d7f7 ...
	I0419 12:40:46.136568    9295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.crt.8352d7f7: {Name:mk0761eb88abc89e7c785f10ca01a4f153b316ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:40:46.136890    9295 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.key.8352d7f7 ...
	I0419 12:40:46.136895    9295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.key.8352d7f7: {Name:mkbf53f0ffca4dce5ad5fa220496f7f4a08a3405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:40:46.137025    9295 certs.go:381] copying /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.crt.8352d7f7 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.crt
	I0419 12:40:46.137175    9295 certs.go:385] copying /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.key.8352d7f7 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.key
	I0419 12:40:46.137335    9295 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/proxy-client.key
	I0419 12:40:46.137466    9295 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/7304.pem (1338 bytes)
	W0419 12:40:46.137500    9295 certs.go:480] ignoring /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/7304_empty.pem, impossibly tiny 0 bytes
	I0419 12:40:46.137506    9295 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 12:40:46.137526    9295 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem (1078 bytes)
	I0419 12:40:46.137544    9295 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem (1123 bytes)
	I0419 12:40:46.137562    9295 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/key.pem (1679 bytes)
	I0419 12:40:46.137603    9295 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/files/etc/ssl/certs/73042.pem (1708 bytes)
	I0419 12:40:46.137949    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 12:40:46.145289    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0419 12:40:46.151698    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 12:40:46.158734    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0419 12:40:46.166024    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0419 12:40:46.173607    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 12:40:46.179859    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 12:40:46.186618    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 12:40:46.194010    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 12:40:46.200351    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/7304.pem --> /usr/share/ca-certificates/7304.pem (1338 bytes)
	I0419 12:40:46.206587    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/files/etc/ssl/certs/73042.pem --> /usr/share/ca-certificates/73042.pem (1708 bytes)
	I0419 12:40:46.213775    9295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0419 12:40:46.219015    9295 ssh_runner.go:195] Run: openssl version
	I0419 12:40:46.220878    9295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73042.pem && ln -fs /usr/share/ca-certificates/73042.pem /etc/ssl/certs/73042.pem"
	I0419 12:40:46.223747    9295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73042.pem
	I0419 12:40:46.225056    9295 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:24 /usr/share/ca-certificates/73042.pem
	I0419 12:40:46.225076    9295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73042.pem
	I0419 12:40:46.226670    9295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73042.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 12:40:46.229699    9295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 12:40:46.232421    9295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 12:40:46.233695    9295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0419 12:40:46.233717    9295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 12:40:46.235486    9295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 12:40:46.238694    9295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7304.pem && ln -fs /usr/share/ca-certificates/7304.pem /etc/ssl/certs/7304.pem"
	I0419 12:40:46.241935    9295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7304.pem
	I0419 12:40:46.243330    9295 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:24 /usr/share/ca-certificates/7304.pem
	I0419 12:40:46.243353    9295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7304.pem
	I0419 12:40:46.245149    9295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7304.pem /etc/ssl/certs/51391683.0"
	I0419 12:40:46.248038    9295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 12:40:46.249397    9295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0419 12:40:46.251444    9295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0419 12:40:46.253216    9295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0419 12:40:46.254939    9295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0419 12:40:46.256756    9295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0419 12:40:46.258384    9295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
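
The openssl x509 -checkend 86400 runs above ask one question per certificate: does it expire within the next 24 hours (86400 seconds)? A nonzero exit triggers regeneration. The equivalent check in Go with crypto/x509 (cert path taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresSoon answers the same question as `openssl x509 -checkend`:
// does the PEM certificate at path expire within the given window?
func expiresSoon(path string, window time.Duration) (bool, error) {
	pemBytes, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
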
	I0419 12:40:46.260124    9295 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51447 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0419 12:40:46.260185    9295 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0419 12:40:46.270339    9295 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0419 12:40:46.273385    9295 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0419 12:40:46.273391    9295 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0419 12:40:46.273394    9295 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0419 12:40:46.273412    9295 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0419 12:40:46.276219    9295 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0419 12:40:46.276533    9295 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-860000" does not appear in /Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:40:46.276636    9295 kubeconfig.go:62] /Users/jenkins/minikube-integration/18669-6895/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-860000" cluster setting kubeconfig missing "stopped-upgrade-860000" context setting]
	I0419 12:40:46.276844    9295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/kubeconfig: {Name:mkd215d166854846254d417d030271f915e1c7df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:40:46.277278    9295 kapi.go:59] client config for stopped-upgrade-860000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/client.key", CAFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104737980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 12:40:46.277591    9295 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0419 12:40:46.280328    9295 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-860000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
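
Drift detection here is plain diff -u: exit status 0 means the configs match, 1 means they differ (drift), and anything higher is a real failure. A Go sketch that runs the same comparison and keeps the unified diff for logging:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrift runs `diff -u` exactly as the log does; only exit status 1
// (files differ) is treated as drift, other failures are real errors.
func configDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).Output()
	if err == nil {
		return false, "", nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // drifted; out holds the unified diff
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Print(diff)
	}
}
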
	I0419 12:40:46.280338    9295 kubeadm.go:1154] stopping kube-system containers ...
	I0419 12:40:46.280376    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0419 12:40:46.293886    9295 docker.go:483] Stopping containers: [5ce705ff4dfe 8129fb0f9c59 1a1ee76d9718 986cd162b7e6 b92c1db2efbd 2ba5461e0d60 21d19188b6ac 87f1b14237b7]
	I0419 12:40:46.293953    9295 ssh_runner.go:195] Run: docker stop 5ce705ff4dfe 8129fb0f9c59 1a1ee76d9718 986cd162b7e6 b92c1db2efbd 2ba5461e0d60 21d19188b6ac 87f1b14237b7
	I0419 12:40:46.309914    9295 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0419 12:40:46.315353    9295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 12:40:46.318440    9295 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 12:40:46.318446    9295 kubeadm.go:156] found existing configuration files:
	
	I0419 12:40:46.318464    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/admin.conf
	I0419 12:40:46.321397    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 12:40:46.321420    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 12:40:46.324025    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/kubelet.conf
	I0419 12:40:46.326616    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 12:40:46.326635    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 12:40:46.329550    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/controller-manager.conf
	I0419 12:40:46.332008    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 12:40:46.332030    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 12:40:46.334870    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/scheduler.conf
	I0419 12:40:46.338231    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 12:40:46.338283    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
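
The cleanup loop above greps each kubeconfig-style file for the expected control-plane endpoint and rm -f's it when the endpoint is absent (a missing file counts as absent). A compact Go version of that check-then-remove pattern (endpoint and paths taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConf keeps the file only when it already mentions the
// expected endpoint; otherwise it removes it, treating a missing file
// the same way rm -f does.
func cleanStaleConf(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := cleanStaleConf("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:51447"); err != nil {
			fmt.Fprintln(os.Stderr, f+":", err)
		}
	}
}
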
	I0419 12:40:46.341537    9295 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 12:40:46.344369    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 12:40:46.368894    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 12:40:47.203497    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0419 12:40:47.329239    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 12:40:47.351967    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
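
The restart path replays kubeadm init one phase at a time: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch that runs the same sequence; it calls the versioned kubeadm binary directly instead of the logged sudo env PATH=... wrapper and assumes it is run as root:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// runInitPhases replays the logged phase order against one config file.
func runInitPhases(binDir, config string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", config)
		cmd := exec.Command(filepath.Join(binDir, "kubeadm"), args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm init phase %v: %w", p, err)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("/var/lib/minikube/binaries/v1.24.1", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
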
	I0419 12:40:47.376575    9295 api_server.go:52] waiting for apiserver process to appear ...
	I0419 12:40:47.376655    9295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 12:40:44.125240    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:40:44.125349    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:40:44.147655    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:40:44.147731    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:40:44.164379    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:40:44.164450    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:40:44.176974    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:40:44.177082    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:40:44.188720    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:40:44.188789    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:40:44.205896    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:40:44.205971    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:40:44.216964    9133 logs.go:276] 0 containers: []
	W0419 12:40:44.216979    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:40:44.217041    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:40:44.227986    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:40:44.228007    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:40:44.228013    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:40:44.232472    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:40:44.232481    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:40:44.248692    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:40:44.248705    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:40:44.261144    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:40:44.261158    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:40:44.281739    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:40:44.281750    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:40:44.299395    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:40:44.299407    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:40:44.311486    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:40:44.311499    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:40:44.327641    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:40:44.327651    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:40:44.346727    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:40:44.346741    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:40:44.358185    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:40:44.358203    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:40:44.381221    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:40:44.381233    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:40:44.397268    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:40:44.397280    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:40:44.409764    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:40:44.409774    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:40:44.422956    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:40:44.422971    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:40:44.459586    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:40:44.459597    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:40:44.495445    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:40:44.495457    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:40:44.513391    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:40:44.513403    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
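
Each gathering round above is the same fan-out: list container IDs per component, then capture the last 400 lines from each with docker logs --tail 400. A small Go helper in that shape (the container IDs are ones from the log; errors become part of the report rather than aborting the sweep):

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs captures the last 400 lines from each container, mirroring
// the logs.go calls in the trace above.
func gatherLogs(ids []string) map[string]string {
	report := make(map[string]string, len(ids))
	for _, id := range ids {
		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			report[id] = "error: " + err.Error()
			continue
		}
		report[id] = string(out)
	}
	return report
}

func main() {
	for id, logs := range gatherLogs([]string{"f5aecdcb0822", "fb6d3894a088"}) {
		fmt.Printf("== %s ==\n%s\n", id, logs)
	}
}
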
	I0419 12:40:47.027922    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:47.878833    9295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 12:40:48.378723    9295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 12:40:48.388314    9295 api_server.go:72] duration metric: took 1.011758417s to wait for apiserver process to appear ...
	I0419 12:40:48.388333    9295 api_server.go:88] waiting for apiserver healthz status ...
	I0419 12:40:48.388343    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
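
The healthz wait is a poll loop against https://10.0.2.15:8443/healthz: keep issuing GETs until the body is "ok" or the deadline passes, which is the pattern behind the alternating Checking/stopped lines in this trace. A hedged Go sketch of such a loop (InsecureSkipVerify stands in for the client certificates minikube actually uses):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls /healthz until the apiserver answers 200 "ok" or the
// context expires.
func waitHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		if resp, err := client.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitHealthz(ctx, "https://10.0.2.15:8443/healthz"))
}
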
	I0419 12:40:52.029096    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:52.029199    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:40:52.041628    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:40:52.041698    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:40:52.052032    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:40:52.052105    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:40:52.062065    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:40:52.062134    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:40:52.072613    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:40:52.072682    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:40:52.086886    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:40:52.086945    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:40:52.096997    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:40:52.097054    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:40:52.107972    9133 logs.go:276] 0 containers: []
	W0419 12:40:52.107983    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:40:52.108044    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:40:52.118428    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:40:52.118445    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:40:52.118450    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:40:52.129932    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:40:52.129944    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:40:52.141424    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:40:52.141438    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:40:52.159626    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:40:52.159642    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:40:52.173631    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:40:52.173641    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:40:52.185547    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:40:52.185558    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:40:52.200721    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:40:52.200731    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:40:52.217801    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:40:52.217816    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:40:52.222254    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:40:52.222261    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:40:52.242027    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:40:52.242037    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:40:52.253363    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:40:52.253373    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:40:52.265549    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:40:52.265562    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:40:52.278028    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:40:52.278042    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:40:52.289641    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:40:52.289651    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:40:52.324725    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:40:52.324733    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:40:52.338949    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:40:52.338963    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:40:52.361740    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:40:52.361755    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:40:53.390362    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:53.390382    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:54.900473    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:58.390807    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:58.390871    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:59.902746    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:59.902949    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:40:59.930379    9133 logs.go:276] 2 containers: [f5aecdcb0822 1d424cfff08b]
	I0419 12:40:59.930502    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:40:59.947632    9133 logs.go:276] 2 containers: [fb6d3894a088 e6f848e18f0b]
	I0419 12:40:59.947722    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:40:59.964874    9133 logs.go:276] 1 containers: [a0f1cbcebc85]
	I0419 12:40:59.964946    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:40:59.975565    9133 logs.go:276] 2 containers: [a1e362963bf9 543f6d6ab63d]
	I0419 12:40:59.975651    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:40:59.986138    9133 logs.go:276] 1 containers: [9d720c8fd051]
	I0419 12:40:59.986200    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:40:59.997896    9133 logs.go:276] 2 containers: [d6c0bb6cf1c5 ffe6fa954ae5]
	I0419 12:40:59.997966    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:41:00.008548    9133 logs.go:276] 0 containers: []
	W0419 12:41:00.008559    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:41:00.008619    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:41:00.019008    9133 logs.go:276] 2 containers: [cebcd3c86943 5591acc62b12]
	I0419 12:41:00.019031    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:41:00.019037    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:41:00.023310    9133 logs.go:123] Gathering logs for kube-scheduler [a1e362963bf9] ...
	I0419 12:41:00.023318    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1e362963bf9"
	I0419 12:41:00.035840    9133 logs.go:123] Gathering logs for kube-controller-manager [ffe6fa954ae5] ...
	I0419 12:41:00.035854    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe6fa954ae5"
	I0419 12:41:00.047940    9133 logs.go:123] Gathering logs for storage-provisioner [5591acc62b12] ...
	I0419 12:41:00.047951    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5591acc62b12"
	I0419 12:41:00.058738    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:41:00.058748    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:41:00.083705    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:41:00.083714    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:41:00.119026    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:41:00.119034    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:41:00.154292    9133 logs.go:123] Gathering logs for etcd [e6f848e18f0b] ...
	I0419 12:41:00.154305    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f848e18f0b"
	I0419 12:41:00.169443    9133 logs.go:123] Gathering logs for kube-apiserver [f5aecdcb0822] ...
	I0419 12:41:00.169454    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aecdcb0822"
	I0419 12:41:00.183801    9133 logs.go:123] Gathering logs for kube-apiserver [1d424cfff08b] ...
	I0419 12:41:00.183814    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d424cfff08b"
	I0419 12:41:00.203660    9133 logs.go:123] Gathering logs for coredns [a0f1cbcebc85] ...
	I0419 12:41:00.203670    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0f1cbcebc85"
	I0419 12:41:00.214704    9133 logs.go:123] Gathering logs for storage-provisioner [cebcd3c86943] ...
	I0419 12:41:00.214715    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cebcd3c86943"
	I0419 12:41:00.226090    9133 logs.go:123] Gathering logs for etcd [fb6d3894a088] ...
	I0419 12:41:00.226104    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb6d3894a088"
	I0419 12:41:00.240195    9133 logs.go:123] Gathering logs for kube-scheduler [543f6d6ab63d] ...
	I0419 12:41:00.240206    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 543f6d6ab63d"
	I0419 12:41:00.255399    9133 logs.go:123] Gathering logs for kube-proxy [9d720c8fd051] ...
	I0419 12:41:00.255410    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d720c8fd051"
	I0419 12:41:00.273436    9133 logs.go:123] Gathering logs for kube-controller-manager [d6c0bb6cf1c5] ...
	I0419 12:41:00.273446    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6c0bb6cf1c5"
	I0419 12:41:00.290175    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:41:00.290184    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:41:02.803963    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:03.391348    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:03.391398    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:07.806085    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:07.806166    9133 kubeadm.go:591] duration metric: took 4m4.127296s to restartPrimaryControlPlane
	W0419 12:41:07.806241    9133 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0419 12:41:07.806273    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0419 12:41:08.797372    9133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 12:41:08.802584    9133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 12:41:08.805611    9133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 12:41:08.808754    9133 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 12:41:08.808760    9133 kubeadm.go:156] found existing configuration files:
	
	I0419 12:41:08.808787    9133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/admin.conf
	I0419 12:41:08.811477    9133 kubeadm.go:162] "https://control-plane.minikube.internal:51218" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 12:41:08.811497    9133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 12:41:08.814334    9133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/kubelet.conf
	I0419 12:41:08.817614    9133 kubeadm.go:162] "https://control-plane.minikube.internal:51218" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 12:41:08.817641    9133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 12:41:08.820531    9133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/controller-manager.conf
	I0419 12:41:08.822983    9133 kubeadm.go:162] "https://control-plane.minikube.internal:51218" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 12:41:08.823003    9133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 12:41:08.826027    9133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/scheduler.conf
	I0419 12:41:08.829287    9133 kubeadm.go:162] "https://control-plane.minikube.internal:51218" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51218 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 12:41:08.829306    9133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0419 12:41:08.832160    9133 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0419 12:41:08.848396    9133 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0419 12:41:08.848438    9133 kubeadm.go:309] [preflight] Running pre-flight checks
	I0419 12:41:08.894962    9133 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0419 12:41:08.895025    9133 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0419 12:41:08.895074    9133 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0419 12:41:08.943422    9133 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 12:41:08.948581    9133 out.go:204]   - Generating certificates and keys ...
	I0419 12:41:08.948617    9133 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0419 12:41:08.948655    9133 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0419 12:41:08.948701    9133 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0419 12:41:08.948730    9133 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0419 12:41:08.948758    9133 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0419 12:41:08.948785    9133 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0419 12:41:08.948829    9133 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0419 12:41:08.948857    9133 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0419 12:41:08.948899    9133 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0419 12:41:08.948934    9133 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0419 12:41:08.948953    9133 kubeadm.go:309] [certs] Using the existing "sa" key
	I0419 12:41:08.948986    9133 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 12:41:09.023906    9133 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 12:41:09.271693    9133 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 12:41:09.368974    9133 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 12:41:09.460621    9133 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 12:41:09.489843    9133 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 12:41:09.490315    9133 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 12:41:09.490419    9133 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0419 12:41:09.569870    9133 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 12:41:08.391980    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:08.392003    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:09.573318    9133 out.go:204]   - Booting up control plane ...
	I0419 12:41:09.573366    9133 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 12:41:09.573411    9133 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 12:41:09.573449    9133 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 12:41:09.573493    9133 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 12:41:09.573814    9133 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0419 12:41:14.076265    9133 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502160 seconds
	I0419 12:41:14.076341    9133 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0419 12:41:14.079814    9133 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0419 12:41:14.594912    9133 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0419 12:41:14.595176    9133 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-311000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0419 12:41:15.098516    9133 kubeadm.go:309] [bootstrap-token] Using token: sl9qiv.yyect9jtigof15l8
	I0419 12:41:15.104821    9133 out.go:204]   - Configuring RBAC rules ...
	I0419 12:41:15.104881    9133 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0419 12:41:15.104929    9133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0419 12:41:15.108496    9133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0419 12:41:15.109413    9133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0419 12:41:15.110846    9133 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0419 12:41:15.112090    9133 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0419 12:41:15.115162    9133 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0419 12:41:15.281257    9133 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0419 12:41:15.501757    9133 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0419 12:41:15.502174    9133 kubeadm.go:309] 
	I0419 12:41:15.502211    9133 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0419 12:41:15.502215    9133 kubeadm.go:309] 
	I0419 12:41:15.502249    9133 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0419 12:41:15.502264    9133 kubeadm.go:309] 
	I0419 12:41:15.502278    9133 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0419 12:41:15.502315    9133 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0419 12:41:15.502340    9133 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0419 12:41:15.502344    9133 kubeadm.go:309] 
	I0419 12:41:15.502378    9133 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0419 12:41:15.502381    9133 kubeadm.go:309] 
	I0419 12:41:15.502404    9133 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0419 12:41:15.502407    9133 kubeadm.go:309] 
	I0419 12:41:15.502438    9133 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0419 12:41:15.502475    9133 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0419 12:41:15.502511    9133 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0419 12:41:15.502514    9133 kubeadm.go:309] 
	I0419 12:41:15.502564    9133 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0419 12:41:15.502603    9133 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0419 12:41:15.502606    9133 kubeadm.go:309] 
	I0419 12:41:15.502660    9133 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token sl9qiv.yyect9jtigof15l8 \
	I0419 12:41:15.502715    9133 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:43bc0efc3f284da6029f4e6dabe908f0c23cb1fa613a356d9709456ef7f07973 \
	I0419 12:41:15.502728    9133 kubeadm.go:309] 	--control-plane 
	I0419 12:41:15.502730    9133 kubeadm.go:309] 
	I0419 12:41:15.502778    9133 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0419 12:41:15.502783    9133 kubeadm.go:309] 
	I0419 12:41:15.502826    9133 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token sl9qiv.yyect9jtigof15l8 \
	I0419 12:41:15.502884    9133 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:43bc0efc3f284da6029f4e6dabe908f0c23cb1fa613a356d9709456ef7f07973 
	I0419 12:41:15.502947    9133 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0419 12:41:15.502954    9133 cni.go:84] Creating CNI manager for ""
	I0419 12:41:15.502963    9133 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:41:15.507463    9133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0419 12:41:15.515406    9133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0419 12:41:15.518587    9133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
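	The 496-byte /etc/cni/net.d/1-k8s.conflist written above is not reproduced in this log. A representative (not byte-identical) bridge conflist per the CNI spec, written the same way, sketched in Go:

```go
package main

import "os"

// Illustrative bridge-plugin conflist; field values are assumptions,
// not the exact file minikube generates.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	// Mirrors the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step:
	// place the config where the container runtime's CNI loader looks for it.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```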
	I0419 12:41:15.523421    9133 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0419 12:41:15.523486    9133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 12:41:15.523545    9133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-311000 minikube.k8s.io/updated_at=2024_04_19T12_41_15_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b minikube.k8s.io/name=running-upgrade-311000 minikube.k8s.io/primary=true
	I0419 12:41:15.570814    9133 ops.go:34] apiserver oom_adj: -16
	I0419 12:41:15.570818    9133 kubeadm.go:1107] duration metric: took 47.366458ms to wait for elevateKubeSystemPrivileges
	W0419 12:41:15.570843    9133 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0419 12:41:15.570847    9133 kubeadm.go:393] duration metric: took 4m11.905434625s to StartCluster
	I0419 12:41:15.570857    9133 settings.go:142] acquiring lock: {Name:mkc28392d1c267200804e17c319a937f73acc262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:41:15.571023    9133 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:41:15.571402    9133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/kubeconfig: {Name:mkd215d166854846254d417d030271f915e1c7df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:41:15.571625    9133 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:41:15.576432    9133 out.go:177] * Verifying Kubernetes components...
	I0419 12:41:15.571670    9133 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0419 12:41:15.571809    9133 config.go:182] Loaded profile config "running-upgrade-311000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:41:15.584293    9133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:41:15.584304    9133 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-311000"
	I0419 12:41:15.584312    9133 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-311000"
	I0419 12:41:15.584315    9133 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-311000"
	W0419 12:41:15.584319    9133 addons.go:243] addon storage-provisioner should already be in state true
	I0419 12:41:15.584321    9133 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-311000"
	I0419 12:41:15.584337    9133 host.go:66] Checking if "running-upgrade-311000" exists ...
	I0419 12:41:15.585469    9133 kapi.go:59] client config for running-upgrade-311000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000/client.key", CAFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063bf980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
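	The rest.Config dumped above carries everything a client needs: the apiserver host plus client cert/key and the cluster CA. A minimal sketch, assuming client-go is available, of building a clientset from those same fields; the StorageClasses list is the call the default-storageclass addon later times out on in this run:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Paths taken verbatim from the kapi.go config dump above.
	profile := "/Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/running-upgrade-311000"
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: profile + "/client.crt",
			KeyFile:  profile + "/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/18669-6895/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same list the default-storageclass addon needs; in this run it
	// fails with "dial tcp 10.0.2.15:8443: i/o timeout" (see below).
	scs, err := clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d storage classes\n", len(scs.Items))
}
```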
	I0419 12:41:15.586222    9133 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-311000"
	W0419 12:41:15.586227    9133 addons.go:243] addon default-storageclass should already be in state true
	I0419 12:41:15.586235    9133 host.go:66] Checking if "running-upgrade-311000" exists ...
	I0419 12:41:15.590395    9133 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:41:13.392687    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:13.392850    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:15.594501    9133 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 12:41:15.594507    9133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0419 12:41:15.594514    9133 sshutil.go:53] new ssh client: &{IP:localhost Port:51186 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/running-upgrade-311000/id_rsa Username:docker}
	I0419 12:41:15.595256    9133 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0419 12:41:15.595262    9133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0419 12:41:15.595266    9133 sshutil.go:53] new ssh client: &{IP:localhost Port:51186 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/running-upgrade-311000/id_rsa Username:docker}
	I0419 12:41:15.661405    9133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 12:41:15.666574    9133 api_server.go:52] waiting for apiserver process to appear ...
	I0419 12:41:15.666618    9133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 12:41:15.671248    9133 api_server.go:72] duration metric: took 99.614333ms to wait for apiserver process to appear ...
	I0419 12:41:15.671256    9133 api_server.go:88] waiting for apiserver healthz status ...
	I0419 12:41:15.671262    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:15.676787    9133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0419 12:41:15.677387    9133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 12:41:18.394157    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:18.394202    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:20.673324    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:20.673349    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:23.395608    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:23.395678    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:25.673814    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:25.673865    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:28.396026    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:28.396067    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:30.674300    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:30.674338    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:33.397819    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:33.397877    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:35.675146    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:35.675187    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:38.400180    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:38.400227    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:40.675988    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:40.676027    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:45.677043    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:45.677080    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0419 12:41:46.026163    9133 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0419 12:41:46.030167    9133 out.go:177] * Enabled addons: storage-provisioner
	I0419 12:41:43.401897    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:43.401944    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:46.038105    9133 addons.go:505] duration metric: took 30.4671385s for enable addons: enabled=[storage-provisioner]
	I0419 12:41:48.402248    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:48.402600    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:41:48.442838    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:41:48.442964    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:41:48.462318    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:41:48.462418    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:41:48.476303    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:41:48.476376    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:41:48.488233    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:41:48.488305    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:41:48.499738    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:41:48.499797    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:41:48.510871    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:41:48.510940    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:41:48.521766    9295 logs.go:276] 0 containers: []
	W0419 12:41:48.521776    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:41:48.521831    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:41:48.532899    9295 logs.go:276] 0 containers: []
	W0419 12:41:48.532912    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:41:48.532926    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:41:48.532939    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:41:48.548648    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:41:48.548658    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:41:48.560575    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:41:48.560586    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:41:48.599051    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:41:48.599062    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:41:48.627125    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:41:48.627139    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:41:48.644316    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:41:48.644328    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:41:48.659738    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:41:48.659750    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:41:48.663954    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:41:48.663963    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:41:48.679846    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:41:48.679859    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:41:48.693654    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:41:48.693668    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:41:48.710067    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:41:48.710078    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:41:48.722359    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:41:48.722370    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:41:48.826784    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:41:48.826797    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:41:48.840764    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:41:48.840775    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:41:48.851929    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:41:48.851939    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:41:51.380285    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:50.678418    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:50.678518    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:56.381994    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:56.382415    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:41:56.419869    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:41:56.420053    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:41:56.444517    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:41:56.444608    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:41:56.458258    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:41:56.458333    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:41:56.470132    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:41:56.470201    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:41:56.484974    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:41:56.485044    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:41:56.495382    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:41:56.495453    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:41:56.505801    9295 logs.go:276] 0 containers: []
	W0419 12:41:56.505814    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:41:56.505865    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:41:56.516275    9295 logs.go:276] 0 containers: []
	W0419 12:41:56.516285    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:41:56.516291    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:41:56.516297    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:41:56.552574    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:41:56.552586    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:41:56.566533    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:41:56.566544    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:41:56.603915    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:41:56.603924    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:41:56.621479    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:41:56.621490    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:41:56.635737    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:41:56.635747    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:41:56.661996    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:41:56.662015    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:41:56.687002    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:41:56.687015    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:41:56.700881    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:41:56.700893    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:41:56.714628    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:41:56.714638    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:41:56.731515    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:41:56.731524    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:41:56.743252    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:41:56.743266    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:41:56.747899    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:41:56.747907    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:41:56.759410    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:41:56.759422    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:41:56.774014    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:41:56.774029    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:41:55.680479    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:55.680521    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:59.290766    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:00.682729    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:00.682770    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:04.292029    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:04.292272    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:04.320899    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:04.321012    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:04.338222    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:04.338322    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:04.351801    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:04.351878    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:04.363935    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:04.364000    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:04.375615    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:04.375675    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:04.386397    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:04.386462    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:04.396349    9295 logs.go:276] 0 containers: []
	W0419 12:42:04.396362    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:04.396432    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:04.408350    9295 logs.go:276] 0 containers: []
	W0419 12:42:04.408361    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:04.408368    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:04.408372    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:42:04.422822    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:04.422835    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:04.465797    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:04.465808    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:42:04.479992    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:04.480004    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:04.491846    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:04.491857    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:04.506914    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:04.506924    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:04.522270    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:04.522281    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:04.533501    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:04.533513    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:04.547801    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:04.547810    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:04.574770    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:04.574782    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:04.612520    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:04.612532    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:04.642138    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:04.642161    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:04.653640    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:04.653652    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:04.671653    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:04.671665    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:04.676166    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:04.676177    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:07.194636    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:05.684932    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:05.684972    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:12.197169    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:12.197344    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:12.213703    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:12.213784    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:12.229644    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:12.229710    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:12.240357    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:12.240423    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:12.250541    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:12.250600    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:12.260532    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:12.260600    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:12.270468    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:12.270527    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:12.280224    9295 logs.go:276] 0 containers: []
	W0419 12:42:12.280236    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:12.280295    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:12.294788    9295 logs.go:276] 0 containers: []
	W0419 12:42:12.294800    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:12.294808    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:12.294814    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:12.334717    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:12.334731    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:12.360071    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:12.360081    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:12.371517    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:12.371527    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:12.396506    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:12.396514    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:12.407933    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:12.407943    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:12.445127    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:12.445136    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:42:12.459027    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:12.459037    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:12.472902    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:12.472912    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:12.490504    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:12.490515    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:12.504887    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:12.504898    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:12.508981    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:12.508987    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:12.526994    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:12.527004    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:12.540162    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:12.540174    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:12.555087    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:12.555101    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:42:10.687128    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:10.687176    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:15.072167    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:15.689424    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:15.689561    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:15.706099    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:42:15.706175    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:15.717438    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:42:15.717503    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:15.727819    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:42:15.727880    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:15.738139    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:42:15.738207    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:15.748038    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:42:15.748105    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:15.761763    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:42:15.761833    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:15.771522    9133 logs.go:276] 0 containers: []
	W0419 12:42:15.771533    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:15.771586    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:15.783369    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:42:15.783382    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:42:15.783387    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:42:15.795021    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:42:15.795032    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:42:15.813686    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:42:15.813696    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:42:15.826017    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:15.826027    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:15.830331    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:42:15.830337    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:42:15.844292    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:42:15.844305    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:42:15.855315    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:42:15.855325    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:42:15.866483    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:42:15.866495    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:42:15.891226    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:15.891236    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:15.915448    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:42:15.915456    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:15.928098    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:15.928111    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:15.962664    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:15.962675    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:15.998435    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:42:15.998449    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:42:18.514368    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:20.073944    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:20.074079    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:20.088701    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:20.088772    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:20.101155    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:20.101227    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:20.117595    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:20.117661    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:20.127961    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:20.128027    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:20.138211    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:20.138268    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:20.148581    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:20.148644    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:20.159451    9295 logs.go:276] 0 containers: []
	W0419 12:42:20.159463    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:20.159517    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:20.174062    9295 logs.go:276] 0 containers: []
	W0419 12:42:20.174073    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:20.174081    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:20.174088    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:20.191170    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:20.191183    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:20.228812    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:20.228822    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:20.244322    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:20.244335    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:20.248640    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:20.248649    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:20.275401    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:20.275414    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:20.290454    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:20.290465    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:20.326916    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:20.326928    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:20.339042    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:20.339056    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:20.352667    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:20.352679    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:20.369599    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:20.369609    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:42:20.383978    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:20.383989    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:20.409594    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:20.409603    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:20.421040    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:20.421051    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:20.438371    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:20.438386    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
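The block above is one full diagnostic pass: the healthz probe at api_server.go:269 gives up after its client timeout, and minikube falls back to enumerating and dumping component containers before retrying. A minimal, self-contained Go sketch of that probe loop follows; the endpoint, timeout, retry interval, and all names are illustrative assumptions, not minikube's actual code, and TLS verification is skipped only because this sketch does not load the cluster CA.

    // Sketch of the healthz poll seen at api_server.go:253/269 (assumed shape).
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // A short per-request timeout; exceeding it while waiting for
            // response headers produces exactly the "Client.Timeout exceeded
            // while awaiting headers" error recorded in the log.
            Timeout: 4 * time.Second,
            Transport: &http.Transport{
                // The apiserver serves a self-signed cert; skip verification
                // here for illustration only.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // mirrors the api_server.go:269 line
                time.Sleep(3 * time.Second)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("healthy")
                return
            }
        }
        fmt.Println("gave up waiting for apiserver")
    }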
	I0419 12:42:23.516654    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:23.516788    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:23.527462    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:42:23.527537    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:23.538127    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:42:23.538189    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:23.548950    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:42:23.549018    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:23.559086    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:42:23.559153    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:23.569533    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:42:23.569600    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:23.579787    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:42:23.579843    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:23.589887    9133 logs.go:276] 0 containers: []
	W0419 12:42:23.589898    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:23.589955    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:23.600199    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:42:23.600213    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:23.600218    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:23.604983    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:42:23.604993    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:42:23.619110    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:23.619121    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:23.643578    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:42:23.643585    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:42:23.654820    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:42:23.654830    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:42:23.667013    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:42:23.667023    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:42:23.686751    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:42:23.686765    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:42:23.698413    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:42:23.698422    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:42:23.715947    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:23.715958    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:23.751445    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:23.751456    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:23.797329    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:42:23.797340    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:42:23.811393    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:42:23.811405    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:42:23.823682    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:42:23.823693    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
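Each pass begins by resolving component names to container IDs with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" (ssh_runner.go:195), counting the matches (logs.go:276), and warning when a component such as kindnet has none (logs.go:278). Below is a rough local sketch of that lookup, assuming a docker CLI on PATH rather than the SSH runner the test actually uses; listContainers is a hypothetical helper name.

    // Sketch of the per-component container lookup (assumed shape).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of all containers, running or exited,
    // whose name matches k8s_<component>, the naming scheme cri-dockerd uses
    // for pod containers.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per output line
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println("lookup failed:", err)
                continue
            }
            if len(ids) == 0 {
                // matches the W-level "No container was found matching" line
                fmt.Printf("no container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }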
	I0419 12:42:22.954382    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:26.337172    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:27.956628    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:27.956718    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:27.969356    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:27.969425    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:27.983989    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:27.984061    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:27.994418    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:27.994485    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:28.004966    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:28.005045    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:28.018251    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:28.018313    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:28.029209    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:28.029277    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:28.038915    9295 logs.go:276] 0 containers: []
	W0419 12:42:28.038928    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:28.038982    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:28.049494    9295 logs.go:276] 0 containers: []
	W0419 12:42:28.049505    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:28.049512    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:28.049519    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:28.069822    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:28.069832    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:28.084253    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:28.084266    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:28.088374    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:28.088380    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:28.124599    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:28.124609    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:28.149584    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:28.149597    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:42:28.163719    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:28.163729    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:28.175689    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:28.175700    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:28.187270    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:28.187282    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:28.201584    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:28.201595    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:28.216095    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:28.216104    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:28.232006    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:28.232021    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:28.243874    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:28.243889    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:28.267182    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:28.267191    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:28.303906    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:28.303917    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
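For every ID found, the "Gathering logs for <component> [<id>] ..." step shells out to "docker logs --tail 400 <id>" through /bin/bash -c, exactly as the Run: lines show. A minimal sketch of that dump loop, with the two etcd IDs from this run reused purely as example input; gather is an illustrative name, not minikube's.

    // Sketch of the per-container log dump (assumed shape).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gather(id string) (string, error) {
        // docker logs writes to both stdout and stderr; CombinedOutput
        // captures both streams for the collected report.
        out, err := exec.Command("/bin/bash", "-c",
            "docker logs --tail 400 "+id).CombinedOutput()
        return string(out), err
    }

    func main() {
        for _, id := range []string{"c80736489828", "1a1ee76d9718"} {
            text, err := gather(id)
            if err != nil {
                fmt.Println("gather failed:", err)
                continue
            }
            fmt.Println(text)
        }
    }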
	I0419 12:42:30.820558    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:31.339625    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:31.340020    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:31.382521    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:42:31.382657    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:31.403810    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:42:31.403923    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:31.419553    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:42:31.419629    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:31.432294    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:42:31.432362    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:31.443376    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:42:31.443447    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:31.454721    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:42:31.454787    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:31.464849    9133 logs.go:276] 0 containers: []
	W0419 12:42:31.464861    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:31.464918    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:31.475296    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:42:31.475313    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:31.475318    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:31.480356    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:42:31.480365    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:42:31.494673    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:42:31.494684    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:42:31.506579    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:42:31.506588    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:42:31.517876    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:31.517888    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:31.542420    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:31.542429    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:31.576015    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:31.576025    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:31.615806    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:42:31.615820    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:42:31.629720    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:42:31.629732    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:42:31.643924    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:42:31.643936    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:42:31.659699    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:42:31.659711    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:42:31.677240    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:42:31.677250    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:42:31.689148    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:42:31.689159    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
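The "container status" step just above runs a deliberate fallback chain: prefer crictl if which finds it, otherwise try the bare name, and fall back to "sudo docker ps -a" if that fails too. The backtick substitution and the || chaining only work under a shell, which is why the log wraps the command in /bin/bash -c rather than exec'ing it directly. A sketch of invoking it the same way from Go, copying the command string verbatim from the log:

    // Sketch of the container-status probe with its shell fallback.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Copied from the Run: line; bash evaluates the backticks and ||.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(string(out))
    }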
	I0419 12:42:35.823139    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:35.823375    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:35.846600    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:35.846690    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:35.860942    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:35.861013    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:35.873344    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:35.873406    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:35.884376    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:35.884446    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:35.895319    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:35.895386    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:35.906038    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:35.906108    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:35.916560    9295 logs.go:276] 0 containers: []
	W0419 12:42:35.916572    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:35.916630    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:35.926668    9295 logs.go:276] 0 containers: []
	W0419 12:42:35.926679    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:35.926688    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:35.926693    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:35.951863    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:35.951876    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:42:35.967338    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:35.967349    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:35.982702    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:35.982713    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:42:35.997077    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:35.997087    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:36.015997    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:36.016008    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:36.028819    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:36.028831    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:36.032923    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:36.032932    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:36.067246    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:36.067256    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:36.081354    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:36.081365    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:36.100888    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:36.100901    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:36.118943    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:36.118953    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:36.136375    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:36.136385    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:36.173750    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:36.173758    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:36.185165    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:36.185177    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:34.202051    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:38.710831    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:39.204205    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:39.204394    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:39.223553    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:42:39.223639    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:39.237246    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:42:39.237308    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:39.249561    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:42:39.249631    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:39.260119    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:42:39.260175    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:39.270395    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:42:39.270463    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:39.280852    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:42:39.280924    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:39.291126    9133 logs.go:276] 0 containers: []
	W0419 12:42:39.291136    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:39.291186    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:39.302005    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:42:39.302018    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:42:39.302026    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:42:39.313435    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:39.313445    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:39.338074    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:42:39.338081    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:39.349158    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:39.349170    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:39.384052    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:39.384061    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:39.420310    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:42:39.420323    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:42:39.440208    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:42:39.440220    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:42:39.452503    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:42:39.452513    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:42:39.470550    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:42:39.470563    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:42:39.482511    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:39.482522    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:39.487168    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:42:39.487175    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:42:39.501143    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:42:39.501155    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:42:39.516726    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:42:39.516735    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
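Alongside the container dumps, each pass also collects host-side sources: the kubelet and docker/cri-docker units from journald, and recent kernel warnings from dmesg piped through tail. The command strings below are copied from the Run: lines above; hostLogs is an assumed helper name, and running this outside the minikube VM would query the local host's journal instead.

    // Sketch of the host-log collection steps (assumed shape).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func hostLogs(name, cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("%s: %v\n", name, err)
            return
        }
        fmt.Printf("== %s ==\n%s", name, out)
    }

    func main() {
        hostLogs("kubelet", "sudo journalctl -u kubelet -n 400")
        hostLogs("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        hostLogs("dmesg",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }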
	I0419 12:42:42.031111    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:43.713389    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:43.713619    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:43.733774    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:43.733860    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:43.748293    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:43.748362    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:43.762442    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:43.762511    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:43.772964    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:43.773032    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:43.783621    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:43.783683    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:43.794882    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:43.794947    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:43.804779    9295 logs.go:276] 0 containers: []
	W0419 12:42:43.804789    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:43.804840    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:43.814689    9295 logs.go:276] 0 containers: []
	W0419 12:42:43.814703    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:43.814710    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:43.814718    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:43.838964    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:43.838974    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:42:43.857667    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:43.857678    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:43.876029    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:43.876040    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:43.880337    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:43.880344    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:43.918990    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:43.919001    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:43.933727    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:43.933739    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:43.949176    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:43.949186    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:43.960916    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:43.960928    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:44.000443    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:44.000454    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:44.025163    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:44.025173    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:44.036395    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:44.036409    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:44.047489    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:44.047500    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:44.065256    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:44.065267    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:44.079759    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:44.079770    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:42:46.599140    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:47.033255    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:47.033442    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:47.053760    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:42:47.053847    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:47.068199    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:42:47.068267    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:47.080216    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:42:47.080286    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:47.090418    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:42:47.090479    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:47.100510    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:42:47.100580    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:47.112203    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:42:47.112269    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:47.122432    9133 logs.go:276] 0 containers: []
	W0419 12:42:47.122449    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:47.122510    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:47.132582    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:42:47.132599    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:42:47.132603    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:42:47.150335    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:47.150347    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:47.173645    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:42:47.173653    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:42:47.185525    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:47.185536    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:47.189727    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:47.189737    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:47.224503    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:42:47.224513    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:42:47.238660    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:42:47.238670    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:42:47.256738    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:42:47.256748    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:42:47.268436    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:42:47.268446    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:42:47.284140    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:42:47.284150    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:42:47.295702    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:47.295712    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:47.329001    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:42:47.329010    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:47.339906    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:42:47.339916    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:42:51.601727    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:51.601862    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:51.613277    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:51.613357    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:51.623351    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:51.623409    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:51.633766    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:51.633832    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:51.649574    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:51.649646    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:51.662197    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:51.662263    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:51.672710    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:51.672787    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:51.682982    9295 logs.go:276] 0 containers: []
	W0419 12:42:51.682993    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:51.683049    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:51.697559    9295 logs.go:276] 0 containers: []
	W0419 12:42:51.697569    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:51.697577    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:51.697583    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:51.709310    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:51.709320    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:51.746110    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:51.746120    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:51.771392    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:51.771403    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:51.783577    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:51.783589    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:51.808490    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:51.808501    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:51.812532    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:51.812543    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:51.827773    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:51.827785    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:42:51.842702    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:51.842711    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:51.857385    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:51.857398    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:51.875303    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:51.875315    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:51.909156    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:51.909167    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:51.946545    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:51.946556    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:51.963899    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:51.963914    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:51.980703    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:51.980719    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:42:49.853737    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:54.496765    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:54.856333    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:54.856651    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:54.890424    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:42:54.890558    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:54.915676    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:42:54.915760    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:54.930159    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:42:54.930233    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:54.949954    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:42:54.950014    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:54.960745    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:42:54.960812    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:54.975408    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:42:54.975475    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:54.989970    9133 logs.go:276] 0 containers: []
	W0419 12:42:54.989981    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:54.990038    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:55.000609    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:42:55.000626    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:55.000632    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:55.035458    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:55.035469    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:55.040404    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:42:55.040411    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:42:55.054592    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:42:55.054606    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:42:55.069187    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:42:55.069198    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:42:55.080638    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:55.080647    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:55.104015    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:42:55.104026    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:55.115444    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:55.115460    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:55.149460    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:42:55.149471    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:42:55.163549    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:42:55.163559    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:42:55.175718    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:42:55.175728    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:42:55.191519    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:42:55.191530    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:42:55.208971    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:42:55.208981    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:42:57.722542    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:59.499041    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:59.499229    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:59.515997    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:59.516084    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:59.529851    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:59.529921    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:59.541313    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:59.541377    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:59.551364    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:59.551434    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:59.562170    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:59.562242    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:59.572940    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:59.573003    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:59.583420    9295 logs.go:276] 0 containers: []
	W0419 12:42:59.583432    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:59.583488    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:59.593181    9295 logs.go:276] 0 containers: []
	W0419 12:42:59.593193    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:59.593202    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:59.593208    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:59.612673    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:59.612687    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:42:59.626176    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:59.626189    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:59.641025    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:59.641036    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:59.658786    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:59.658800    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:59.683709    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:59.683718    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:59.695513    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:59.695523    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:59.730442    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:59.730453    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:59.741659    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:59.741668    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:59.752741    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:59.752753    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:59.773313    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:59.773325    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:59.812478    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:59.812488    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:59.837816    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:59.837826    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:59.852883    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:59.852892    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:42:59.873597    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:59.873610    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:02.379783    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:02.724405    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:02.724642    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:02.753652    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:02.753792    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:02.772429    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:02.772538    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:02.787866    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:43:02.787957    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:02.800125    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:02.800193    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:02.811243    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:02.811334    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:02.822122    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:02.822201    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:02.832188    9133 logs.go:276] 0 containers: []
	W0419 12:43:02.832199    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:02.832263    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:02.842862    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:02.842876    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:02.842884    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:02.878333    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:02.878344    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:02.892796    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:02.892807    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:02.904601    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:02.904611    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:02.915799    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:02.915809    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:02.928613    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:02.928625    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:02.945463    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:02.945473    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:02.969325    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:02.969335    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:03.002346    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:03.002356    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:03.006689    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:03.006697    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:43:03.020352    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:03.020362    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:03.032997    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:03.033007    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:03.047649    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:03.047659    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:07.382085    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:07.382256    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:07.405011    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:43:07.405094    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:07.421144    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:43:07.421221    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:07.432289    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:43:07.432360    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:07.443027    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:43:07.443097    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:07.453337    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:43:07.453402    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:07.465165    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:43:07.465228    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:07.475146    9295 logs.go:276] 0 containers: []
	W0419 12:43:07.475157    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:07.475212    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:07.485531    9295 logs.go:276] 0 containers: []
	W0419 12:43:07.485543    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:43:07.485552    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:07.485559    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:07.520394    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:43:07.520407    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:43:07.534324    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:43:07.534338    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:43:07.549064    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:43:07.549078    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:43:07.570265    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:43:07.570274    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:43:07.587894    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:07.587904    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:07.610800    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:43:07.610808    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:43:07.636833    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:43:07.636846    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:43:07.656416    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:43:07.656430    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:43:07.668244    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:07.668254    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:07.705377    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:07.705388    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:07.709195    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:43:07.709205    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:43:07.722898    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:43:07.722908    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:43:07.734136    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:43:07.734148    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:43:05.561846    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:07.748732    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:43:07.748742    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:10.263289    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:10.563983    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:10.564076    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:10.574651    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:10.574716    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:10.585586    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:10.585650    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:10.597647    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:43:10.597713    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:10.609959    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:10.610022    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:10.620216    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:10.620276    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:10.630275    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:10.630343    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:10.640233    9133 logs.go:276] 0 containers: []
	W0419 12:43:10.640248    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:10.640299    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:10.650650    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:10.650664    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:10.650668    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:10.667836    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:10.667846    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:10.672692    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:10.672701    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:10.707106    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:10.707116    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:10.721047    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:10.721059    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:43:10.734814    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:10.734825    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:10.745928    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:10.745941    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:10.760441    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:10.760452    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:10.793681    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:10.793691    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:10.805140    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:10.805150    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:10.816767    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:10.816777    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:10.828391    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:10.828401    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:10.852652    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:10.852661    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
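The alternating "Checking apiserver healthz …" / "stopped: … context deadline exceeded (Client.Timeout exceeded while awaiting headers)" pairs throughout this run come from minikube's apiserver wait loop in api_server.go. A minimal sketch of a single probe, assuming a self-signed cluster certificate (hence the skipped TLS verification), an illustrative 5-second client timeout, and the hypothetical helper name probeHealthz; the retry loop around it is elided:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues one GET against the apiserver's /healthz endpoint.
// When the apiserver never answers, the client timeout fires and the error
// surfaces exactly as the "Client.Timeout exceeded while awaiting headers"
// messages in this log.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed value, not taken from the log
		Transport: &http.Transport{
			// The cluster serves a self-signed cert, so verification is skipped.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Same endpoint the log polls.
	if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}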
	I0419 12:43:13.366082    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:15.264582    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:15.264897    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:15.300823    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:43:15.300955    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:15.325150    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:43:15.325243    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:15.339295    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:43:15.339360    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:15.354068    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:43:15.354145    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:15.365083    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:43:15.365149    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:15.376036    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:43:15.376099    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:15.387112    9295 logs.go:276] 0 containers: []
	W0419 12:43:15.387125    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:15.387179    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:15.397354    9295 logs.go:276] 0 containers: []
	W0419 12:43:15.397367    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:43:15.397375    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:15.397382    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:15.434873    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:43:15.434884    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:43:15.449039    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:43:15.449053    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:43:15.460689    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:43:15.460701    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:43:15.482419    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:43:15.482430    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:15.494996    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:15.495008    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:15.499717    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:43:15.499725    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:43:15.524710    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:43:15.524722    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:43:15.539695    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:15.539709    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:15.579009    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:43:15.579019    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:43:15.594059    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:43:15.594069    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:43:15.607659    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:15.607669    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:15.630819    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:43:15.630827    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:43:15.648080    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:43:15.648091    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:43:15.663320    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:43:15.663330    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:43:18.368368    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:18.368532    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:18.387039    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:18.387116    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:18.399912    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:18.399981    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:18.411352    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:43:18.411423    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:18.422073    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:18.422139    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:18.432506    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:18.432571    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:18.446045    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:18.446112    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:18.456360    9133 logs.go:276] 0 containers: []
	W0419 12:43:18.456371    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:18.456432    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:18.466824    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:18.466839    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:18.466845    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:18.478156    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:18.478167    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:18.497239    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:18.497249    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:18.514566    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:18.514580    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:18.526077    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:18.526088    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:18.561999    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:18.562010    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:18.576500    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:18.576511    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:43:18.589781    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:18.589792    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:18.601358    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:18.601371    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:18.613428    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:18.613438    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:18.629828    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:18.629838    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:18.654052    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:18.654060    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:18.687409    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:18.687415    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
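The "N containers: [...]" lines are produced by enumerating containers per component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`. A sketch of that lookup, run locally with os/exec rather than over SSH as minikube's ssh_runner does, with listContainers as a hypothetical name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or exited)
// whose name matches k8s_<component>.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; Fields also tolerates a trailing newline.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Matches the "N containers: [...]" lines in the log.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}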
	I0419 12:43:18.179688    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:21.194035    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:23.182032    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:23.182237    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:23.208916    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:43:23.209049    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:23.226067    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:43:23.226157    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:23.239315    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:43:23.239393    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:23.251115    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:43:23.251190    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:23.261271    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:43:23.261335    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:23.271791    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:43:23.271860    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:23.281897    9295 logs.go:276] 0 containers: []
	W0419 12:43:23.281907    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:23.281965    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:23.292014    9295 logs.go:276] 0 containers: []
	W0419 12:43:23.292026    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:43:23.292033    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:43:23.292038    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:43:23.305927    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:43:23.305941    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:43:23.322042    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:43:23.322054    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:43:23.336024    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:43:23.336034    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:23.347547    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:43:23.347559    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:43:23.363517    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:43:23.363528    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:43:23.381411    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:23.381420    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:23.404527    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:23.404546    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:23.408805    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:43:23.408814    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:43:23.432307    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:43:23.432318    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:43:23.458657    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:43:23.458667    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:43:23.476783    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:23.476797    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:23.516626    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:23.516636    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:23.553667    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:43:23.553683    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:43:23.565551    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:43:23.565563    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:43:26.082937    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:26.196371    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:26.196677    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:26.231910    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:26.232070    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:26.251731    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:26.251808    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:26.265890    9133 logs.go:276] 2 containers: [c0251d75bd38 d044b3c4661d]
	I0419 12:43:26.265961    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:26.277989    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:26.278055    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:26.288681    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:26.288741    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:26.299435    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:26.299501    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:26.309404    9133 logs.go:276] 0 containers: []
	W0419 12:43:26.309415    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:26.309479    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:26.320027    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:26.320041    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:26.320046    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:26.331987    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:26.331999    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:26.336883    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:26.336889    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:26.350677    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:26.350691    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:26.362197    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:26.362206    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:26.373845    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:26.373855    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:26.388494    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:26.388504    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:26.405708    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:26.405721    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:26.417298    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:26.417307    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:26.441920    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:26.441936    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:26.476509    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:26.476516    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:26.512135    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:26.512145    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:43:26.529661    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:26.529672    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
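The recurring "container status" step uses a shell fallback: prefer crictl when it is installed, otherwise fall back to plain `docker ps -a`. A sketch that issues the same compound command locally, assuming bash and sudo are available (the backticks are shell command substitution, kept verbatim from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same fallback chain as the log's container-status command.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("status listing failed:", err)
	}
	fmt.Print(string(out))
}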
	I0419 12:43:29.044372    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:31.083982    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:31.084206    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:31.103842    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:43:31.103934    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:31.118852    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:43:31.118933    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:31.130840    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:43:31.130906    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:31.141646    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:43:31.141720    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:31.151703    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:43:31.151760    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:31.162399    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:43:31.162464    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:31.172769    9295 logs.go:276] 0 containers: []
	W0419 12:43:31.172782    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:31.172835    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:31.183013    9295 logs.go:276] 0 containers: []
	W0419 12:43:31.183023    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:43:31.183030    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:43:31.183036    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:43:31.196955    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:43:31.196966    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:43:31.222283    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:31.222293    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:31.245967    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:31.245976    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:31.250447    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:43:31.250456    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:43:31.269841    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:43:31.269854    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:43:31.284779    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:43:31.284791    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:31.296443    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:31.296457    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:31.330411    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:43:31.330425    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:43:31.355148    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:43:31.355161    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:43:31.372006    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:43:31.372022    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:43:31.387090    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:43:31.387101    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:43:31.399269    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:43:31.399283    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:43:31.413977    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:31.413987    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:31.453402    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:43:31.453411    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:43:34.046659    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:34.046788    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:34.061241    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:34.061318    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:34.073173    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:34.073241    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:34.084482    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:43:34.084551    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:34.094781    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:34.094853    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:34.105026    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:34.105102    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:33.969469    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:34.117060    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:34.117123    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:34.127342    9133 logs.go:276] 0 containers: []
	W0419 12:43:34.127353    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:34.127400    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:34.143040    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:34.143058    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:43:34.143066    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:43:34.154501    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:34.154520    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:34.171409    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:34.171424    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:34.183177    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:34.183186    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:34.194525    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:34.194535    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:34.227410    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:34.227418    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:34.262843    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:34.262857    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:43:34.278796    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:43:34.278807    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:43:34.290467    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:34.290479    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:34.305378    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:34.305388    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:34.317863    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:34.317872    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:34.329458    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:34.329468    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:34.333798    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:34.333804    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:34.347546    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:34.347556    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:34.358776    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:34.358789    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:36.886039    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:38.971759    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:38.972180    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:39.012167    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:43:39.012309    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:39.033327    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:43:39.033442    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:39.056709    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:43:39.056786    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:39.068270    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:43:39.068339    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:39.081923    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:43:39.081990    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:39.092687    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:43:39.092756    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:39.109048    9295 logs.go:276] 0 containers: []
	W0419 12:43:39.109058    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:39.109116    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:39.119884    9295 logs.go:276] 0 containers: []
	W0419 12:43:39.119896    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:43:39.119906    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:43:39.119941    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:43:39.145094    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:43:39.145105    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:43:39.159581    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:39.159591    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:39.198246    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:39.198255    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:39.203227    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:43:39.203239    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:43:39.218027    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:43:39.218036    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:43:39.236656    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:43:39.236668    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:43:39.251286    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:43:39.251295    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:39.264189    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:43:39.264200    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:43:39.278027    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:43:39.278043    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:43:39.289269    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:43:39.289280    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:43:39.304081    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:43:39.304092    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:43:39.319070    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:43:39.319082    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:43:39.331113    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:39.331124    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:39.355552    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:39.355562    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:41.892050    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:41.888687    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:41.889108    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:41.923886    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:41.924018    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:41.943933    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:41.944026    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:41.960344    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:43:41.960446    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:41.975015    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:41.975079    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:41.986089    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:41.986162    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:41.997392    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:41.997463    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:42.007706    9133 logs.go:276] 0 containers: []
	W0419 12:43:42.007720    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:42.007777    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:42.022490    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:42.022511    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:42.022516    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:42.039091    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:42.039105    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:42.073454    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:42.073468    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:42.091976    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:42.091986    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:42.104528    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:42.104541    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:42.109061    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:42.109071    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:42.124079    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:42.124089    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:43:42.137987    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:43:42.137997    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:43:42.149072    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:42.149083    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:42.161844    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:42.161855    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:42.195603    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:42.195610    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:42.207590    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:42.207600    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:42.219686    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:42.219696    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:42.235320    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:42.235333    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:42.260186    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:43:42.260193    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:43:46.893848    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:46.894242    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:46.928601    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:43:46.928727    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:46.948080    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:43:46.948178    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:46.962739    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:43:46.962815    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:46.975392    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:43:46.975463    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:46.990143    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:43:46.990217    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:47.002991    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:43:47.003063    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:47.014000    9295 logs.go:276] 0 containers: []
	W0419 12:43:47.014011    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:47.014075    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:47.024280    9295 logs.go:276] 0 containers: []
	W0419 12:43:47.024289    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:43:47.024298    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:47.024304    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:47.061275    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:47.061285    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:47.065428    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:43:47.065434    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:43:47.090085    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:43:47.090094    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:43:47.105081    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:43:47.105093    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:43:47.117084    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:43:47.117096    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:43:47.137460    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:43:47.137472    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:43:47.149445    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:43:47.149456    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:43:47.166427    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:43:47.166438    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:47.182396    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:47.182406    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:47.217716    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:43:47.217729    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:43:47.231409    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:43:47.231422    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:43:47.248937    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:43:47.248946    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:43:47.264055    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:43:47.264064    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:43:47.278601    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:47.278610    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:44.778442    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:49.804211    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:49.781015    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:49.781519    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:49.820156    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:49.820282    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:49.839922    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:49.840009    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:49.855173    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:43:49.855244    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:49.868068    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:49.868132    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:49.878705    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:49.878769    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:49.889467    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:49.889525    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:49.900540    9133 logs.go:276] 0 containers: []
	W0419 12:43:49.900553    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:49.900606    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:49.911274    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:49.911288    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:49.911293    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:49.923897    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:49.923908    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:49.938656    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:49.938666    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:49.950140    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:49.950149    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:49.984392    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:49.984402    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:49.999315    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:49.999329    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:43:50.013618    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:43:50.013629    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:43:50.025249    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:50.025260    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:50.037520    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:50.037532    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:50.051533    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:50.051544    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:50.063235    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:50.063245    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:50.087404    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:50.087416    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:50.092268    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:50.092276    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:50.127355    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:43:50.127366    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:43:50.139157    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:50.139166    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:52.658313    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:54.806628    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:54.806860    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:54.836164    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:43:54.836282    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:54.856237    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:43:54.856309    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:54.868574    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:43:54.868641    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:54.879858    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:43:54.879930    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:54.890557    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:43:54.890623    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:54.901086    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:43:54.901151    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:54.912620    9295 logs.go:276] 0 containers: []
	W0419 12:43:54.912634    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:54.912696    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:54.923156    9295 logs.go:276] 0 containers: []
	W0419 12:43:54.923168    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:43:54.923177    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:43:54.923182    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:43:54.937420    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:43:54.937434    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:43:54.952048    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:43:54.952061    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:43:54.969180    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:54.969191    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:54.993640    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:54.993648    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:54.998089    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:43:54.998096    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:43:55.023107    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:43:55.023119    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:43:55.038163    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:43:55.038177    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:43:55.053345    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:43:55.053358    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:43:55.064936    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:43:55.064951    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:43:55.078743    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:43:55.078753    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:43:55.089833    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:43:55.089845    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:43:55.116465    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:43:55.116475    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:55.127990    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:55.128002    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:55.166770    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:55.166780    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:57.701706    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:57.658541    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:57.658661    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:57.670754    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:43:57.670830    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:57.681971    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:43:57.682043    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:57.693078    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:43:57.693143    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:57.703299    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:43:57.703358    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:57.714002    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:43:57.714058    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:57.724291    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:43:57.724350    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:57.734154    9133 logs.go:276] 0 containers: []
	W0419 12:43:57.734164    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:57.734210    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:57.744372    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:43:57.744389    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:43:57.744394    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:43:57.755566    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:43:57.755575    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:43:57.767769    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:43:57.767778    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:57.782155    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:43:57.782164    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:43:57.796489    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:43:57.796502    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:43:57.807679    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:43:57.807689    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:43:57.823358    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:43:57.823369    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:43:57.848200    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:57.848214    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:57.882420    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:43:57.882431    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:43:57.897581    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:43:57.897592    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:43:57.909434    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:43:57.909447    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:43:57.922152    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:57.922162    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:57.946077    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:57.946086    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:57.979704    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:57.979715    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:57.984402    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:43:57.984409    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
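Each log-gathering pass in this transcript has the same shape: enumerate the containers for each control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tail the last 400 lines of every hit with docker logs. The Go sketch below replays that loop; it assumes a local docker CLI in place of minikube's ssh_runner, and the component list is copied from the filters above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the "docker ps -a --filter=name=k8s_<component>"
// calls in the log: it returns the IDs of all containers, running or
// exited, whose name carries the k8s_ prefix minikube gives kubeadm pods.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The components each gathering pass enumerates, in the log's order.
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("listing %s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		for _, id := range ids {
			// Same tail length the log shows minikube using.
			logs, _ := exec.Command("docker", "logs",
				"--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
		}
	}
}
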
	I0419 12:44:02.703958    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:02.704304    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:02.734469    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:44:02.734574    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:00.500237    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:02.751827    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:44:02.751912    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:02.765648    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:44:02.765709    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:02.777425    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:44:02.777497    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:02.788891    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:44:02.788962    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:02.799124    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:44:02.799201    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:02.809504    9295 logs.go:276] 0 containers: []
	W0419 12:44:02.809517    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:02.809573    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:02.820129    9295 logs.go:276] 0 containers: []
	W0419 12:44:02.820140    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:44:02.820150    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:02.820156    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:02.855536    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:44:02.855549    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:44:02.872581    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:44:02.872591    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:44:02.901194    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:44:02.901203    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:44:02.913178    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:44:02.913189    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:44:02.924542    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:02.924551    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:02.928735    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:44:02.928745    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:44:02.943651    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:44:02.943667    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:44:02.958298    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:44:02.958309    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:44:02.973220    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:02.973230    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:03.012920    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:44:03.012930    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:44:03.030123    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:44:03.030133    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:03.041514    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:44:03.041525    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:44:03.055190    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:44:03.055200    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:44:03.070359    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:03.070369    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:05.595377    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:05.502588    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:05.502750    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:05.514611    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:44:05.514681    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:05.527247    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:44:05.527315    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:05.545451    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:44:05.545519    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:05.555688    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:44:05.555754    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:05.569718    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:44:05.569781    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:05.589583    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:44:05.589645    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:05.599575    9133 logs.go:276] 0 containers: []
	W0419 12:44:05.599589    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:05.599639    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:05.609813    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:44:05.609830    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:05.609835    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:05.643714    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:44:05.643725    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:44:05.658505    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:44:05.658516    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:44:05.670530    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:44:05.670545    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:44:05.682239    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:05.682249    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:05.715451    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:44:05.715459    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:44:05.726512    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:44:05.726521    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:44:05.740119    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:44:05.740130    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:44:05.751815    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:44:05.751826    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:44:05.763066    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:44:05.763077    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:44:05.781160    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:05.781171    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:05.804654    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:44:05.804662    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:05.815989    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:05.815998    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:05.820420    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:44:05.820428    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:44:05.835241    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:44:05.835252    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:44:08.349376    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:10.597516    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:10.597767    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:10.623106    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:44:10.623210    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:10.639576    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:44:10.639659    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:10.664430    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:44:10.664498    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:10.681442    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:44:10.681522    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:10.694777    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:44:10.694840    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:10.705653    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:44:10.705716    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:10.715439    9295 logs.go:276] 0 containers: []
	W0419 12:44:10.715451    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:10.715509    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:10.725527    9295 logs.go:276] 0 containers: []
	W0419 12:44:10.725537    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:44:10.725545    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:10.725551    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:10.730144    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:44:10.730150    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:44:10.744726    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:10.744735    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:10.781708    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:44:10.781734    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:44:10.796697    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:10.796713    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:10.818663    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:44:10.818674    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:44:10.829782    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:44:10.829794    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:44:10.846992    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:44:10.847002    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:44:10.865065    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:10.865076    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:10.902830    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:44:10.902841    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:44:10.919869    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:44:10.919880    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:44:10.944959    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:44:10.944971    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:44:10.959743    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:44:10.959756    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:44:10.974446    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:44:10.974458    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:44:10.992370    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:44:10.992379    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:13.351894    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:13.352251    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:13.385784    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:44:13.385908    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:13.403445    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:44:13.403526    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:13.417705    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:44:13.417783    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:13.429777    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:44:13.429845    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:13.441050    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:44:13.441119    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:13.452995    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:44:13.453058    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:13.463301    9133 logs.go:276] 0 containers: []
	W0419 12:44:13.463314    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:13.463369    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:13.474009    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:44:13.474024    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:13.474029    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:13.478632    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:13.478640    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:13.514973    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:44:13.514983    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:44:13.530667    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:13.530679    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:13.554907    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:44:13.554916    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:13.566723    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:13.566734    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:13.603640    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:44:13.603652    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:44:13.618254    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:44:13.618264    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:44:13.630144    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:44:13.630154    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:44:13.649107    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:44:13.649118    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:44:13.660759    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:44:13.660769    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:44:13.678089    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:44:13.678104    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:44:13.689983    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:44:13.689992    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:44:13.701818    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:44:13.701830    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:44:13.719946    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:44:13.719955    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:44:13.506512    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:16.233968    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:18.508673    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:18.508982    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:18.546129    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:44:18.546261    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:18.564345    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:44:18.564436    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:18.577674    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:44:18.577746    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:18.589837    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:44:18.589907    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:18.600759    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:44:18.600830    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:18.612468    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:44:18.612529    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:18.623171    9295 logs.go:276] 0 containers: []
	W0419 12:44:18.623183    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:18.623240    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:18.633218    9295 logs.go:276] 0 containers: []
	W0419 12:44:18.633228    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:44:18.633236    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:44:18.633240    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:44:18.648127    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:44:18.648138    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:44:18.660824    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:44:18.660835    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:44:18.675243    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:44:18.675253    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:44:18.689485    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:18.689495    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:18.727321    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:44:18.727338    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:44:18.740946    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:44:18.740957    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:44:18.756148    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:44:18.756157    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:44:18.780246    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:44:18.780261    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:44:18.799326    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:18.799335    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:18.821381    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:18.821388    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:18.825288    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:44:18.825297    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:18.837296    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:44:18.837308    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:44:18.861773    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:44:18.861786    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:44:18.876969    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:18.876984    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:21.415513    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:21.236568    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:21.236828    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:21.263880    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:44:21.263998    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:21.282226    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:44:21.282309    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:21.297529    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:44:21.297602    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:21.308877    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:44:21.308942    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:21.319161    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:44:21.319223    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:21.332717    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:44:21.332781    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:21.342861    9133 logs.go:276] 0 containers: []
	W0419 12:44:21.342873    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:21.342924    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:21.353390    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:44:21.353407    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:21.353413    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:21.388592    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:21.388603    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:21.393137    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:44:21.393146    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:44:21.404935    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:44:21.404944    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:44:21.416414    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:21.416425    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:21.450737    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:44:21.450752    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:44:21.462305    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:44:21.462318    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:21.474934    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:21.474945    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:21.500231    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:44:21.500240    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:44:21.514454    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:44:21.514465    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:44:21.526300    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:44:21.526312    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:44:21.538255    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:44:21.538266    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:44:21.555997    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:44:21.556007    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:44:21.571007    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:44:21.571017    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:44:21.583329    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:44:21.583339    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:44:24.099720    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:26.417613    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:26.417811    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:26.433227    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:44:26.433305    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:26.444212    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:44:26.444287    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:26.454358    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:44:26.454427    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:26.464767    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:44:26.464837    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:26.475300    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:44:26.475364    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:26.490513    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:44:26.490587    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:26.501210    9295 logs.go:276] 0 containers: []
	W0419 12:44:26.501221    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:26.501275    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:26.511287    9295 logs.go:276] 0 containers: []
	W0419 12:44:26.511297    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:44:26.511303    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:44:26.511308    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:44:26.525319    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:26.525335    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:26.549006    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:44:26.549014    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:26.560645    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:26.560659    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:26.599447    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:44:26.599457    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:44:26.616325    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:44:26.616337    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:44:26.631472    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:44:26.631481    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:44:26.642881    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:44:26.642891    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:44:26.669732    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:44:26.669743    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:44:26.681423    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:44:26.681434    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:44:26.698440    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:26.698450    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:26.703003    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:26.703011    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:26.740346    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:44:26.740357    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:44:26.755641    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:44:26.755651    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:44:26.770472    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:44:26.770482    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:44:29.102001    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:29.102167    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:29.287172    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:29.119314    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:44:29.119395    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:29.132564    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:44:29.132632    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:29.143495    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:44:29.143560    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:29.160107    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:44:29.160171    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:29.170116    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:44:29.170179    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:29.184998    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:44:29.185058    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:29.194995    9133 logs.go:276] 0 containers: []
	W0419 12:44:29.195009    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:29.195062    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:29.205390    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:44:29.205406    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:29.205411    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:29.239686    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:44:29.239696    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:44:29.258108    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:44:29.258121    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:44:29.270153    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:29.270165    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:29.295517    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:44:29.295527    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:44:29.309346    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:44:29.309359    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:44:29.320805    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:44:29.320819    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:44:29.332983    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:44:29.332994    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:44:29.349985    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:44:29.349996    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:29.363691    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:29.363704    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:29.368041    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:29.368050    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:29.403273    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:44:29.403286    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:44:29.426301    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:44:29.426311    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:44:29.438558    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:44:29.438569    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:44:29.450120    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:44:29.450133    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:44:31.967291    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:34.289297    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:34.289504    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:34.314458    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:44:34.314560    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:34.330863    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:44:34.330947    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:34.343968    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:44:34.344034    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:34.355803    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:44:34.355869    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:34.366731    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:44:34.366794    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:34.384775    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:44:34.384838    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:34.399411    9295 logs.go:276] 0 containers: []
	W0419 12:44:34.399425    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:34.399483    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:34.409781    9295 logs.go:276] 0 containers: []
	W0419 12:44:34.409793    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:44:34.409801    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:44:34.409806    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:44:34.425306    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:44:34.425317    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:44:34.439759    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:44:34.439770    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:44:34.450953    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:44:34.450965    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:44:34.468238    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:34.468247    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:34.490988    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:44:34.490999    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:34.504128    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:34.504139    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:34.508251    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:44:34.508257    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:44:34.532139    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:34.532154    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:34.568897    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:34.568905    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:34.602695    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:44:34.602705    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:44:34.620439    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:44:34.620453    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:44:34.632323    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:44:34.632338    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:44:34.648185    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:44:34.648195    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:44:34.663469    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:44:34.663481    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:44:37.180117    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:36.969763    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:36.969894    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:36.984893    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:44:36.984992    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:36.997218    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:44:36.997288    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:37.012126    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:44:37.012194    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:37.023411    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:44:37.023480    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:37.034008    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:44:37.034073    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:37.044718    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:44:37.044809    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:37.054638    9133 logs.go:276] 0 containers: []
	W0419 12:44:37.054649    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:37.054702    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:37.065610    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:44:37.065627    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:44:37.065633    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:44:37.080932    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:44:37.080944    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:44:37.094912    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:44:37.094922    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:44:37.107289    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:44:37.107302    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:44:37.124642    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:44:37.124653    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:44:37.135980    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:37.135991    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:37.168862    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:37.168874    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:37.203735    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:44:37.203747    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:44:37.215336    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:44:37.215347    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:37.227368    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:37.227378    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:37.231945    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:44:37.231954    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:44:37.243632    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:37.243645    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:37.267731    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:44:37.267742    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:44:37.280598    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:44:37.280609    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:44:37.297981    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:44:37.297991    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:44:42.182264    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
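	[editor's note] Both processes (9133 and 9295) poll the apiserver's /healthz endpoint at 10.0.2.15:8443, record a client timeout as "stopped", and fall back to another log-gathering round. To probe the same endpoint manually (a hedged sketch; the in-tree check in api_server.go validates against the cluster CA rather than skipping TLS verification):

	    # -k skips TLS verification, which the real check does not do
	    curl -k --max-time 5 https://10.0.2.15:8443/healthz; echo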
	I0419 12:44:42.182501    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:42.205152    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:44:42.205265    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:42.221685    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:44:42.221758    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:42.234015    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:44:42.234083    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:42.244815    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:44:42.244882    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:42.254879    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:44:42.254942    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:42.265967    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:44:42.266030    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:42.284611    9295 logs.go:276] 0 containers: []
	W0419 12:44:42.284622    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:42.284677    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:42.295112    9295 logs.go:276] 0 containers: []
	W0419 12:44:42.295128    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:44:42.295136    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:42.295142    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:42.334056    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:42.334065    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:42.338225    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:44:42.338232    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:44:42.366268    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:44:42.366278    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:44:42.399725    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:44:42.399735    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:44:42.414495    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:44:42.414505    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:44:42.431781    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:44:42.431790    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:44:42.446132    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:42.446142    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:42.480636    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:44:42.480650    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:44:42.492166    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:44:42.492179    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:44:42.509217    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:44:42.509226    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:44:42.523034    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:44:42.523047    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:44:42.538864    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:44:42.538879    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:44:42.554004    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:42.554019    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:42.577872    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:44:42.577880    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:39.809871    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:45.091469    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:44.812296    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:44.812647    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:44.842287    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:44:44.842410    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:44.860954    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:44:44.861031    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:44.876685    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:44:44.876749    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:44.888357    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:44:44.888427    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:44.899190    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:44:44.899267    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:44.910853    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:44:44.910923    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:44.921718    9133 logs.go:276] 0 containers: []
	W0419 12:44:44.921730    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:44.921787    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:44.933601    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:44:44.933621    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:44.933626    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:44.966971    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:44:44.966979    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:44:44.985188    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:44:44.985200    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:44:45.008288    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:44:45.008297    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:44:45.019902    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:45.019913    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:45.044278    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:44:45.044285    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:45.055901    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:45.055912    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:45.060264    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:45.060271    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:45.095784    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:44:45.095794    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:44:45.114423    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:44:45.114436    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:44:45.128355    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:44:45.128366    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:44:45.139963    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:44:45.139973    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:44:45.151341    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:44:45.151353    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:44:45.164300    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:44:45.164311    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:44:45.175980    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:44:45.175994    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:44:47.689577    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:50.092411    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:50.092482    9295 kubeadm.go:591] duration metric: took 4m3.824481792s to restartPrimaryControlPlane
	W0419 12:44:50.092555    9295 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0419 12:44:50.092579    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0419 12:44:51.059240    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 12:44:51.064313    9295 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 12:44:51.067085    9295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 12:44:51.069861    9295 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 12:44:51.069868    9295 kubeadm.go:156] found existing configuration files:
	
	I0419 12:44:51.069887    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/admin.conf
	I0419 12:44:51.072475    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 12:44:51.072503    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 12:44:51.074814    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/kubelet.conf
	I0419 12:44:51.077856    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 12:44:51.077880    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 12:44:51.081095    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/controller-manager.conf
	I0419 12:44:51.083494    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 12:44:51.083516    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 12:44:51.086246    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/scheduler.conf
	I0419 12:44:51.089336    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 12:44:51.089358    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
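	[editor's note] The config check at 12:44:51 fails because none of the four kubeconfigs exist, so minikube falls through to the grep-or-remove pass shown above: each file is searched for the expected control-plane endpoint and deleted when the endpoint is absent (here trivially, since the files are missing). Condensed into a shell sketch:

	    ep="https://control-plane.minikube.internal:51447"
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "$ep" /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	    done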
	I0419 12:44:51.091896    9295 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0419 12:44:51.109036    9295 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0419 12:44:51.109094    9295 kubeadm.go:309] [preflight] Running pre-flight checks
	I0419 12:44:51.158840    9295 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0419 12:44:51.158909    9295 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0419 12:44:51.158962    9295 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0419 12:44:51.210975    9295 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 12:44:51.214258    9295 out.go:204]   - Generating certificates and keys ...
	I0419 12:44:51.214292    9295 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0419 12:44:51.214339    9295 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0419 12:44:51.214436    9295 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0419 12:44:51.214500    9295 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0419 12:44:51.214535    9295 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0419 12:44:51.214567    9295 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0419 12:44:51.214595    9295 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0419 12:44:51.214680    9295 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0419 12:44:51.214752    9295 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0419 12:44:51.214845    9295 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0419 12:44:51.214884    9295 kubeadm.go:309] [certs] Using the existing "sa" key
	I0419 12:44:51.214915    9295 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 12:44:51.356534    9295 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 12:44:51.416182    9295 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 12:44:51.452681    9295 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 12:44:51.553146    9295 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 12:44:51.582227    9295 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 12:44:51.583372    9295 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 12:44:51.583395    9295 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0419 12:44:51.656274    9295 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 12:44:51.661698    9295 out.go:204]   - Booting up control plane ...
	I0419 12:44:51.661753    9295 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 12:44:51.661790    9295 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 12:44:51.661820    9295 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 12:44:51.661858    9295 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 12:44:51.661953    9295 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0419 12:44:52.690771    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:52.690865    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:52.702600    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:44:52.702674    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:52.714635    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:44:52.714712    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:52.729325    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:44:52.729400    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:52.742900    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:44:52.742990    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:52.755946    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:44:52.756019    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:52.767607    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:44:52.767673    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:52.781658    9133 logs.go:276] 0 containers: []
	W0419 12:44:52.781669    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:52.781725    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:52.793047    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:44:52.793065    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:52.793073    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:52.798438    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:44:52.798449    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:44:52.814344    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:44:52.814359    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:44:52.830461    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:44:52.830476    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:44:52.844916    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:44:52.844931    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:44:52.866122    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:44:52.866135    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:44:52.879332    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:44:52.879346    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:44:52.892295    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:44:52.892309    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:44:52.926609    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:44:52.926626    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:44:52.944443    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:52.944454    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:52.969938    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:52.969953    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:53.008096    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:53.008116    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:53.046221    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:44:53.046233    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:44:53.058544    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:44:53.058559    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:44:53.071423    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:44:53.071436    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:56.664279    9295 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.007730 seconds
	I0419 12:44:56.664453    9295 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0419 12:44:56.677965    9295 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0419 12:44:57.190534    9295 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0419 12:44:57.190663    9295 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-860000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0419 12:44:57.704246    9295 kubeadm.go:309] [bootstrap-token] Using token: pmip4s.5q42x0gk1u9qbqk8
	I0419 12:44:57.708466    9295 out.go:204]   - Configuring RBAC rules ...
	I0419 12:44:57.708595    9295 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0419 12:44:57.709267    9295 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0419 12:44:57.715852    9295 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0419 12:44:57.718027    9295 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0419 12:44:57.719926    9295 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0419 12:44:57.721647    9295 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0419 12:44:57.728492    9295 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0419 12:44:57.867805    9295 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0419 12:44:58.111755    9295 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0419 12:44:58.112342    9295 kubeadm.go:309] 
	I0419 12:44:58.112371    9295 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0419 12:44:58.112374    9295 kubeadm.go:309] 
	I0419 12:44:58.112422    9295 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0419 12:44:58.112429    9295 kubeadm.go:309] 
	I0419 12:44:58.112444    9295 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0419 12:44:58.112477    9295 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0419 12:44:58.112502    9295 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0419 12:44:58.112512    9295 kubeadm.go:309] 
	I0419 12:44:58.112541    9295 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0419 12:44:58.112545    9295 kubeadm.go:309] 
	I0419 12:44:58.112579    9295 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0419 12:44:58.112586    9295 kubeadm.go:309] 
	I0419 12:44:58.112614    9295 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0419 12:44:58.112654    9295 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0419 12:44:58.112692    9295 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0419 12:44:58.112697    9295 kubeadm.go:309] 
	I0419 12:44:58.112738    9295 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0419 12:44:58.112778    9295 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0419 12:44:58.112783    9295 kubeadm.go:309] 
	I0419 12:44:58.112821    9295 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token pmip4s.5q42x0gk1u9qbqk8 \
	I0419 12:44:58.112879    9295 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:43bc0efc3f284da6029f4e6dabe908f0c23cb1fa613a356d9709456ef7f07973 \
	I0419 12:44:58.112892    9295 kubeadm.go:309] 	--control-plane 
	I0419 12:44:58.112897    9295 kubeadm.go:309] 
	I0419 12:44:58.112936    9295 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0419 12:44:58.112943    9295 kubeadm.go:309] 
	I0419 12:44:58.112992    9295 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token pmip4s.5q42x0gk1u9qbqk8 \
	I0419 12:44:58.113069    9295 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:43bc0efc3f284da6029f4e6dabe908f0c23cb1fa613a356d9709456ef7f07973 
	I0419 12:44:58.113398    9295 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
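	[editor's note] With the control plane unrecoverable after 4m3s, the run resets and re-initializes it. The manual equivalent of the commands at 12:44:50-12:44:51, with paths copied from the log (a reproduction sketch, not a recommendation):

	    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
	    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=<full list from the Start: line above>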
	I0419 12:44:58.113414    9295 cni.go:84] Creating CNI manager for ""
	I0419 12:44:58.113425    9295 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:44:58.117130    9295 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0419 12:44:58.124424    9295 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0419 12:44:58.127404    9295 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
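	[editor's note] The 496-byte conflist is copied from memory, so its exact contents are not in this log. A representative bridge conflist of the kind minikube generates (the JSON body is an illustrative assumption, not the verbatim file; the 10.244.0.0/16 pod range matches the 10.244.0.2 coredns pod IP seen later in this report):

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local",
	                    "ranges": [[ { "subnet": "10.244.0.0/16" } ]] } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF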
	I0419 12:44:58.132072    9295 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0419 12:44:58.132116    9295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 12:44:58.132160    9295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-860000 minikube.k8s.io/updated_at=2024_04_19T12_44_58_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b minikube.k8s.io/name=stopped-upgrade-860000 minikube.k8s.io/primary=true
	I0419 12:44:58.135255    9295 ops.go:34] apiserver oom_adj: -16
	I0419 12:44:58.175214    9295 kubeadm.go:1107] duration metric: took 43.133042ms to wait for elevateKubeSystemPrivileges
	W0419 12:44:58.175232    9295 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0419 12:44:58.175235    9295 kubeadm.go:393] duration metric: took 4m11.920693666s to StartCluster
	I0419 12:44:58.175244    9295 settings.go:142] acquiring lock: {Name:mkc28392d1c267200804e17c319a937f73acc262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:44:58.175325    9295 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:44:58.175728    9295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/kubeconfig: {Name:mkd215d166854846254d417d030271f915e1c7df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:44:58.175924    9295 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:44:58.180299    9295 out.go:177] * Verifying Kubernetes components...
	I0419 12:44:58.175939    9295 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0419 12:44:58.176023    9295 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:44:58.188287    9295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:44:58.188289    9295 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-860000"
	I0419 12:44:58.188303    9295 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-860000"
	I0419 12:44:58.188284    9295 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-860000"
	I0419 12:44:58.188318    9295 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-860000"
	W0419 12:44:58.188328    9295 addons.go:243] addon storage-provisioner should already be in state true
	I0419 12:44:58.188350    9295 host.go:66] Checking if "stopped-upgrade-860000" exists ...
	I0419 12:44:58.188753    9295 retry.go:31] will retry after 960.455837ms: connect: dial unix /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/monitor: connect: connection refused
	I0419 12:44:58.192199    9295 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:44:55.586530    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:58.196318    9295 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 12:44:58.196325    9295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0419 12:44:58.196335    9295 sshutil.go:53] new ssh client: &{IP:localhost Port:51412 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/id_rsa Username:docker}
	I0419 12:44:58.266667    9295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 12:44:58.272183    9295 api_server.go:52] waiting for apiserver process to appear ...
	I0419 12:44:58.272221    9295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 12:44:58.275756    9295 api_server.go:72] duration metric: took 99.824416ms to wait for apiserver process to appear ...
	I0419 12:44:58.275765    9295 api_server.go:88] waiting for apiserver healthz status ...
	I0419 12:44:58.275773    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:58.329092    9295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 12:44:59.152853    9295 kapi.go:59] client config for stopped-upgrade-860000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/client.key", CAFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104737980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 12:44:59.153151    9295 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-860000"
	W0419 12:44:59.153160    9295 addons.go:243] addon default-storageclass should already be in state true
	I0419 12:44:59.153178    9295 host.go:66] Checking if "stopped-upgrade-860000" exists ...
	I0419 12:44:59.154253    9295 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0419 12:44:59.154263    9295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0419 12:44:59.154272    9295 sshutil.go:53] new ssh client: &{IP:localhost Port:51412 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/id_rsa Username:docker}
	I0419 12:44:59.191633    9295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
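	[editor's note] Both addons are installed with the same two-step pattern: the manifest is scp'd from memory onto the node, then applied with the on-node kubectl against the on-node kubeconfig. Done by hand in one invocation (sketch):

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.24.1/kubectl apply \
	      -f /etc/kubernetes/addons/storage-provisioner.yaml \
	      -f /etc/kubernetes/addons/storageclass.yaml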
	I0419 12:45:00.588763    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:00.588929    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:45:00.600313    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:45:00.600377    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:45:00.610683    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:45:00.610753    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:45:00.621442    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:45:00.621501    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:45:00.631278    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:45:00.631337    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:45:00.641752    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:45:00.641808    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:45:00.652188    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:45:00.652245    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:45:00.662597    9133 logs.go:276] 0 containers: []
	W0419 12:45:00.662610    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:45:00.662666    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:45:00.673270    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:45:00.673289    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:45:00.673294    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:45:00.686065    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:45:00.686077    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:45:00.704107    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:45:00.704119    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:45:00.728392    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:45:00.728400    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:45:00.767108    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:45:00.767120    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:45:00.787305    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:45:00.787316    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:45:00.798926    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:45:00.798937    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:45:00.810515    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:45:00.810528    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:45:00.822535    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:45:00.822546    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:45:00.827075    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:45:00.827085    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:45:00.841494    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:45:00.841507    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:45:00.864859    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:45:00.864873    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:45:00.898560    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:45:00.898570    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:45:00.910443    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:45:00.910456    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:45:00.924985    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:45:00.924997    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:45:03.438588    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:03.277937    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:03.278017    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:08.440656    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:08.440807    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:45:08.463321    9133 logs.go:276] 1 containers: [8d5750441143]
	I0419 12:45:08.463395    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:45:08.474732    9133 logs.go:276] 1 containers: [12602e2098e4]
	I0419 12:45:08.474803    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:45:08.486162    9133 logs.go:276] 4 containers: [123208fd3974 7e8d92c948d3 c0251d75bd38 d044b3c4661d]
	I0419 12:45:08.486233    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:45:08.504268    9133 logs.go:276] 1 containers: [4027c73736e5]
	I0419 12:45:08.504336    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:45:08.514545    9133 logs.go:276] 1 containers: [1f708eacc69a]
	I0419 12:45:08.514610    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:45:08.525161    9133 logs.go:276] 1 containers: [39fcc6afd4b4]
	I0419 12:45:08.525237    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:45:08.535298    9133 logs.go:276] 0 containers: []
	W0419 12:45:08.535309    9133 logs.go:278] No container was found matching "kindnet"
	I0419 12:45:08.535360    9133 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:45:08.545754    9133 logs.go:276] 1 containers: [6464f53916cf]
	I0419 12:45:08.545775    9133 logs.go:123] Gathering logs for kube-apiserver [8d5750441143] ...
	I0419 12:45:08.545781    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d5750441143"
	I0419 12:45:08.560762    9133 logs.go:123] Gathering logs for coredns [123208fd3974] ...
	I0419 12:45:08.560773    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 123208fd3974"
	I0419 12:45:08.572428    9133 logs.go:123] Gathering logs for kube-scheduler [4027c73736e5] ...
	I0419 12:45:08.572439    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4027c73736e5"
	I0419 12:45:08.587166    9133 logs.go:123] Gathering logs for kubelet ...
	I0419 12:45:08.587177    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:45:08.620595    9133 logs.go:123] Gathering logs for coredns [7e8d92c948d3] ...
	I0419 12:45:08.620608    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e8d92c948d3"
	I0419 12:45:08.631882    9133 logs.go:123] Gathering logs for coredns [c0251d75bd38] ...
	I0419 12:45:08.631892    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0251d75bd38"
	I0419 12:45:08.644210    9133 logs.go:123] Gathering logs for kube-controller-manager [39fcc6afd4b4] ...
	I0419 12:45:08.644221    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39fcc6afd4b4"
	I0419 12:45:08.662118    9133 logs.go:123] Gathering logs for etcd [12602e2098e4] ...
	I0419 12:45:08.662129    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12602e2098e4"
	I0419 12:45:08.675940    9133 logs.go:123] Gathering logs for coredns [d044b3c4661d] ...
	I0419 12:45:08.675954    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d044b3c4661d"
	I0419 12:45:08.691253    9133 logs.go:123] Gathering logs for kube-proxy [1f708eacc69a] ...
	I0419 12:45:08.691265    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f708eacc69a"
	I0419 12:45:08.703674    9133 logs.go:123] Gathering logs for Docker ...
	I0419 12:45:08.703686    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:45:08.726078    9133 logs.go:123] Gathering logs for dmesg ...
	I0419 12:45:08.726085    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:45:08.730742    9133 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:45:08.730748    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:45:08.767535    9133 logs.go:123] Gathering logs for storage-provisioner [6464f53916cf] ...
	I0419 12:45:08.767554    9133 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6464f53916cf"
	I0419 12:45:08.783997    9133 logs.go:123] Gathering logs for container status ...
	I0419 12:45:08.784007    9133 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:45:08.278543    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:08.278568    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:11.297654    9133 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:16.299866    9133 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:16.302689    9133 out.go:177] 
	W0419 12:45:16.306695    9133 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0419 12:45:16.306706    9133 out.go:239] * 
	W0419 12:45:16.307332    9133 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:45:16.320690    9133 out.go:177] 
	I0419 12:45:13.278953    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:13.278979    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:18.279462    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:18.279509    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:23.280260    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:23.280338    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:28.281127    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:28.281177    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0419 12:45:29.243496    9295 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0419 12:45:29.246449    9295 out.go:177] * Enabled addons: storage-provisioner
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-04-19 19:36:29 UTC, ends at Fri 2024-04-19 19:45:32 UTC. --
	Apr 19 19:45:17 running-upgrade-311000 dockerd[2836]: time="2024-04-19T19:45:17.113940474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 19 19:45:17 running-upgrade-311000 dockerd[2836]: time="2024-04-19T19:45:17.114005179Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7dffd66dd095a8a172c27c2f08e1bd55eafd280046fa9b13f04df46b4adee211 pid=18488 runtime=io.containerd.runc.v2
	Apr 19 19:45:17 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:17Z" level=error msg="ContainerStats resp: {0x400079c7c0 linux}"
	Apr 19 19:45:17 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:17Z" level=error msg="ContainerStats resp: {0x40007831c0 linux}"
	Apr 19 19:45:18 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:18Z" level=error msg="ContainerStats resp: {0x4000a10e80 linux}"
	Apr 19 19:45:18 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:18Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Apr 19 19:45:19 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:19Z" level=error msg="ContainerStats resp: {0x4000a11b80 linux}"
	Apr 19 19:45:19 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:19Z" level=error msg="ContainerStats resp: {0x40001e7f40 linux}"
	Apr 19 19:45:19 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:19Z" level=error msg="ContainerStats resp: {0x400074e040 linux}"
	Apr 19 19:45:19 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:19Z" level=error msg="ContainerStats resp: {0x40007ee2c0 linux}"
	Apr 19 19:45:19 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:19Z" level=error msg="ContainerStats resp: {0x40007ee700 linux}"
	Apr 19 19:45:19 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:19Z" level=error msg="ContainerStats resp: {0x400074f580 linux}"
	Apr 19 19:45:19 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:19Z" level=error msg="ContainerStats resp: {0x40007eee80 linux}"
	Apr 19 19:45:23 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:23Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Apr 19 19:45:28 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:28Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Apr 19 19:45:29 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:29Z" level=error msg="ContainerStats resp: {0x400079c3c0 linux}"
	Apr 19 19:45:29 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:29Z" level=error msg="ContainerStats resp: {0x400079cd00 linux}"
	Apr 19 19:45:30 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:30Z" level=error msg="ContainerStats resp: {0x40001e78c0 linux}"
	Apr 19 19:45:31 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:31Z" level=error msg="ContainerStats resp: {0x400040e0c0 linux}"
	Apr 19 19:45:31 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:31Z" level=error msg="ContainerStats resp: {0x40007ef6c0 linux}"
	Apr 19 19:45:31 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:31Z" level=error msg="ContainerStats resp: {0x400040ee40 linux}"
	Apr 19 19:45:31 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:31Z" level=error msg="ContainerStats resp: {0x400040f240 linux}"
	Apr 19 19:45:31 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:31Z" level=error msg="ContainerStats resp: {0x40008c2240 linux}"
	Apr 19 19:45:31 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:31Z" level=error msg="ContainerStats resp: {0x400040e600 linux}"
	Apr 19 19:45:31 running-upgrade-311000 cri-dockerd[2676]: time="2024-04-19T19:45:31Z" level=error msg="ContainerStats resp: {0x400040e8c0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	7dffd66dd095a       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   4880b62380575
	4e82c1cf331a4       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   9ae36fe0c6c28
	123208fd39740       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   9ae36fe0c6c28
	7e8d92c948d36       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   4880b62380575
	6464f53916cf8       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   182e43f86de94
	1f708eacc69a7       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   8c3f3755b0a93
	4027c73736e52       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   824173cb179ac
	12602e2098e49       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   fbe01b855e10d
	39fcc6afd4b4f       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   6d6932d63dc0c
	8d57504411435       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   1b75437758474
	
	
	==> coredns [123208fd3974] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1511795838302536533.6190729502487297000. HINFO: read udp 10.244.0.2:54172->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1511795838302536533.6190729502487297000. HINFO: read udp 10.244.0.2:37045->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1511795838302536533.6190729502487297000. HINFO: read udp 10.244.0.2:34750->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1511795838302536533.6190729502487297000. HINFO: read udp 10.244.0.2:48208->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1511795838302536533.6190729502487297000. HINFO: read udp 10.244.0.2:56599->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1511795838302536533.6190729502487297000. HINFO: read udp 10.244.0.2:42985->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1511795838302536533.6190729502487297000. HINFO: read udp 10.244.0.2:57007->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1511795838302536533.6190729502487297000. HINFO: read udp 10.244.0.2:55983->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4e82c1cf331a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4249668201134625460.986422716811738197. HINFO: read udp 10.244.0.2:59388->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4249668201134625460.986422716811738197. HINFO: read udp 10.244.0.2:49640->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4249668201134625460.986422716811738197. HINFO: read udp 10.244.0.2:55124->10.0.2.3:53: i/o timeout
	
	
	==> coredns [7dffd66dd095] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7687869938044763511.8486008629169056293. HINFO: read udp 10.244.0.3:39647->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7687869938044763511.8486008629169056293. HINFO: read udp 10.244.0.3:34811->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7687869938044763511.8486008629169056293. HINFO: read udp 10.244.0.3:49869->10.0.2.3:53: i/o timeout
	
	
	==> coredns [7e8d92c948d3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8261410524411387825.2168682355796463164. HINFO: read udp 10.244.0.3:45928->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8261410524411387825.2168682355796463164. HINFO: read udp 10.244.0.3:41033->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8261410524411387825.2168682355796463164. HINFO: read udp 10.244.0.3:57119->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8261410524411387825.2168682355796463164. HINFO: read udp 10.244.0.3:58366->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8261410524411387825.2168682355796463164. HINFO: read udp 10.244.0.3:42310->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
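	All four CoreDNS logs show the same pattern: startup is clean, but HINFO probes to the upstream resolver 10.0.2.3:53 (QEMU's built-in user-network DNS) hit i/o timeouts, so upstream forwarding is failing while in-cluster serving works. A minimal way to confirm from inside the cluster, using a hypothetical throwaway pod and assuming the apiserver is still reachable:
	
	    kubectl run dns-probe --image=busybox:1.36 --restart=Never --rm -it -- nslookup example.com 10.0.2.3
	
	If this times out while a lookup of kubernetes.default.svc.cluster.local against the cluster DNS succeeds, the fault is the 10.0.2.3 upstream path, matching the errors above.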
	
	
	==> describe nodes <==
	Name:               running-upgrade-311000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-311000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b
	                    minikube.k8s.io/name=running-upgrade-311000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_19T12_41_15_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Apr 2024 19:41:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-311000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Apr 2024 19:45:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Apr 2024 19:41:15 +0000   Fri, 19 Apr 2024 19:41:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Apr 2024 19:41:15 +0000   Fri, 19 Apr 2024 19:41:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Apr 2024 19:41:15 +0000   Fri, 19 Apr 2024 19:41:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Apr 2024 19:41:15 +0000   Fri, 19 Apr 2024 19:41:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-311000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 9940e7c07ef344878c40ebc02ad73ace
	  System UUID:                9940e7c07ef344878c40ebc02ad73ace
	  Boot ID:                    079b928b-b459-4b43-abe7-f6e94da1a276
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-5brmj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-cmd9z                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-311000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-311000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-311000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-tcc28                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-311000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-311000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-311000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-311000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-311000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-311000 event: Registered Node running-upgrade-311000 in Controller
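	For reference, the percentages in the Allocated resources table are computed against the node's allocatable capacity shown above (2 CPU, 2148820Ki of memory): 850m / 2000m rounds to 42% of CPU, 240Mi / ~2099Mi to 11% of memory, and 340Mi to 16%.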
	
	
	==> dmesg <==
	[  +1.743467] systemd-fstab-generator[873]: Ignoring "noauto" for root device
	[  +0.059950] systemd-fstab-generator[884]: Ignoring "noauto" for root device
	[  +0.062708] systemd-fstab-generator[895]: Ignoring "noauto" for root device
	[  +1.137842] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.075082] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
	[  +0.060291] systemd-fstab-generator[1056]: Ignoring "noauto" for root device
	[  +2.007178] systemd-fstab-generator[1283]: Ignoring "noauto" for root device
	[  +8.656792] systemd-fstab-generator[1918]: Ignoring "noauto" for root device
	[  +2.620563] systemd-fstab-generator[2194]: Ignoring "noauto" for root device
	[  +0.163354] systemd-fstab-generator[2233]: Ignoring "noauto" for root device
	[  +0.073584] systemd-fstab-generator[2244]: Ignoring "noauto" for root device
	[  +0.079809] systemd-fstab-generator[2257]: Ignoring "noauto" for root device
	[  +1.455983] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.124991] systemd-fstab-generator[2633]: Ignoring "noauto" for root device
	[  +0.074274] systemd-fstab-generator[2644]: Ignoring "noauto" for root device
	[  +0.061333] systemd-fstab-generator[2655]: Ignoring "noauto" for root device
	[  +0.094744] systemd-fstab-generator[2669]: Ignoring "noauto" for root device
	[  +2.184875] systemd-fstab-generator[2822]: Ignoring "noauto" for root device
	[Apr19 19:37] systemd-fstab-generator[3198]: Ignoring "noauto" for root device
	[  +1.030043] systemd-fstab-generator[3331]: Ignoring "noauto" for root device
	[ +20.646168] kauditd_printk_skb: 68 callbacks suppressed
	[Apr19 19:41] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.449981] systemd-fstab-generator[11550]: Ignoring "noauto" for root device
	[  +5.614155] systemd-fstab-generator[12170]: Ignoring "noauto" for root device
	[  +0.468653] systemd-fstab-generator[12305]: Ignoring "noauto" for root device
	
	
	==> etcd [12602e2098e4] <==
	{"level":"info","ts":"2024-04-19T19:41:10.920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-04-19T19:41:10.920Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-04-19T19:41:10.931Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-19T19:41:10.932Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-19T19:41:10.932Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-19T19:41:10.932Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-04-19T19:41:10.932Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-04-19T19:41:11.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-19T19:41:11.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-19T19:41:11.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-04-19T19:41:11.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-04-19T19:41:11.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-04-19T19:41:11.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-04-19T19:41:11.762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-04-19T19:41:11.762Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-311000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-19T19:41:11.762Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T19:41:11.762Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-19T19:41:11.763Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-04-19T19:41:11.763Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-19T19:41:11.763Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T19:41:11.763Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T19:41:11.763Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-19T19:41:11.763Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-19T19:41:11.763Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-19T19:41:11.763Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:45:32 up 9 min,  0 users,  load average: 0.18, 0.32, 0.19
	Linux running-upgrade-311000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [8d5750441143] <==
	I0419 19:41:13.039540       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0419 19:41:13.040667       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0419 19:41:13.040678       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0419 19:41:13.040700       1 cache.go:39] Caches are synced for autoregister controller
	I0419 19:41:13.061186       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0419 19:41:13.061315       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0419 19:41:13.063331       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0419 19:41:13.778405       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0419 19:41:13.945499       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0419 19:41:13.950404       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0419 19:41:13.950447       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0419 19:41:14.084197       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0419 19:41:14.094917       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0419 19:41:14.204118       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0419 19:41:14.206187       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0419 19:41:14.206566       1 controller.go:611] quota admission added evaluator for: endpoints
	I0419 19:41:14.208476       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0419 19:41:15.077166       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0419 19:41:15.357873       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0419 19:41:15.361132       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0419 19:41:15.378254       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0419 19:41:15.406442       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0419 19:41:28.331744       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0419 19:41:28.730951       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0419 19:41:29.427846       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [39fcc6afd4b4] <==
	I0419 19:41:27.927792       1 shared_informer.go:262] Caches are synced for deployment
	I0419 19:41:27.928866       1 shared_informer.go:262] Caches are synced for GC
	I0419 19:41:27.929983       1 shared_informer.go:262] Caches are synced for PVC protection
	I0419 19:41:27.929991       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0419 19:41:27.931066       1 shared_informer.go:262] Caches are synced for disruption
	I0419 19:41:27.931074       1 disruption.go:371] Sending events to api server.
	I0419 19:41:27.932161       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0419 19:41:27.943997       1 shared_informer.go:262] Caches are synced for taint
	I0419 19:41:27.944061       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0419 19:41:27.944100       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-311000. Assuming now as a timestamp.
	I0419 19:41:27.944255       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0419 19:41:27.944179       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0419 19:41:27.944381       1 event.go:294] "Event occurred" object="running-upgrade-311000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-311000 event: Registered Node running-upgrade-311000 in Controller"
	I0419 19:41:28.032318       1 shared_informer.go:262] Caches are synced for resource quota
	I0419 19:41:28.077692       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0419 19:41:28.092777       1 shared_informer.go:262] Caches are synced for resource quota
	I0419 19:41:28.102503       1 shared_informer.go:262] Caches are synced for crt configmap
	I0419 19:41:28.154845       1 shared_informer.go:262] Caches are synced for attach detach
	I0419 19:41:28.335632       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tcc28"
	I0419 19:41:28.552272       1 shared_informer.go:262] Caches are synced for garbage collector
	I0419 19:41:28.563961       1 shared_informer.go:262] Caches are synced for garbage collector
	I0419 19:41:28.563994       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0419 19:41:28.732382       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0419 19:41:28.932643       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-5brmj"
	I0419 19:41:28.936522       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-cmd9z"
	
	
	==> kube-proxy [1f708eacc69a] <==
	I0419 19:41:29.416998       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0419 19:41:29.417024       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0419 19:41:29.417044       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0419 19:41:29.425928       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0419 19:41:29.425941       1 server_others.go:206] "Using iptables Proxier"
	I0419 19:41:29.425954       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0419 19:41:29.426082       1 server.go:661] "Version info" version="v1.24.1"
	I0419 19:41:29.426104       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0419 19:41:29.426378       1 config.go:317] "Starting service config controller"
	I0419 19:41:29.426434       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0419 19:41:29.426446       1 config.go:226] "Starting endpoint slice config controller"
	I0419 19:41:29.426469       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0419 19:41:29.426848       1 config.go:444] "Starting node config controller"
	I0419 19:41:29.426852       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0419 19:41:29.527151       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0419 19:41:29.527176       1 shared_informer.go:262] Caches are synced for service config
	I0419 19:41:29.527283       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [4027c73736e5] <==
	W0419 19:41:12.995708       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0419 19:41:12.995727       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0419 19:41:12.995738       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0419 19:41:12.995730       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0419 19:41:12.995763       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0419 19:41:12.995770       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0419 19:41:12.995797       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0419 19:41:12.995804       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0419 19:41:12.995851       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0419 19:41:12.995858       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0419 19:41:12.995909       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0419 19:41:12.995939       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0419 19:41:12.995910       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0419 19:41:12.995975       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0419 19:41:13.833694       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0419 19:41:13.833758       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0419 19:41:13.886229       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0419 19:41:13.886288       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0419 19:41:13.894712       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0419 19:41:13.894884       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0419 19:41:13.913035       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0419 19:41:13.913084       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0419 19:41:13.924597       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0419 19:41:13.924941       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0419 19:41:14.393587       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-04-19 19:36:29 UTC, ends at Fri 2024-04-19 19:45:32 UTC. --
	Apr 19 19:41:27 running-upgrade-311000 kubelet[12176]: I0419 19:41:27.949121   12176 topology_manager.go:200] "Topology Admit Handler"
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: I0419 19:41:28.108265   12176 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2b9b89b7-8c70-4978-a499-959ea57a3682-tmp\") pod \"storage-provisioner\" (UID: \"2b9b89b7-8c70-4978-a499-959ea57a3682\") " pod="kube-system/storage-provisioner"
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: I0419 19:41:28.108296   12176 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92bwb\" (UniqueName: \"kubernetes.io/projected/2b9b89b7-8c70-4978-a499-959ea57a3682-kube-api-access-92bwb\") pod \"storage-provisioner\" (UID: \"2b9b89b7-8c70-4978-a499-959ea57a3682\") " pod="kube-system/storage-provisioner"
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: E0419 19:41:28.211944   12176 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: E0419 19:41:28.211983   12176 projected.go:192] Error preparing data for projected volume kube-api-access-92bwb for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: E0419 19:41:28.212015   12176 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/2b9b89b7-8c70-4978-a499-959ea57a3682-kube-api-access-92bwb podName:2b9b89b7-8c70-4978-a499-959ea57a3682 nodeName:}" failed. No retries permitted until 2024-04-19 19:41:28.712003212 +0000 UTC m=+13.363727972 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-92bwb" (UniqueName: "kubernetes.io/projected/2b9b89b7-8c70-4978-a499-959ea57a3682-kube-api-access-92bwb") pod "storage-provisioner" (UID: "2b9b89b7-8c70-4978-a499-959ea57a3682") : configmap "kube-root-ca.crt" not found
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: I0419 19:41:28.339721   12176 topology_manager.go:200] "Topology Admit Handler"
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: I0419 19:41:28.511540   12176 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6zs9\" (UniqueName: \"kubernetes.io/projected/88fe1d26-ba4e-4f5e-b455-3086e84953de-kube-api-access-x6zs9\") pod \"kube-proxy-tcc28\" (UID: \"88fe1d26-ba4e-4f5e-b455-3086e84953de\") " pod="kube-system/kube-proxy-tcc28"
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: I0419 19:41:28.511643   12176 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/88fe1d26-ba4e-4f5e-b455-3086e84953de-kube-proxy\") pod \"kube-proxy-tcc28\" (UID: \"88fe1d26-ba4e-4f5e-b455-3086e84953de\") " pod="kube-system/kube-proxy-tcc28"
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: I0419 19:41:28.511657   12176 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88fe1d26-ba4e-4f5e-b455-3086e84953de-xtables-lock\") pod \"kube-proxy-tcc28\" (UID: \"88fe1d26-ba4e-4f5e-b455-3086e84953de\") " pod="kube-system/kube-proxy-tcc28"
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: I0419 19:41:28.511670   12176 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88fe1d26-ba4e-4f5e-b455-3086e84953de-lib-modules\") pod \"kube-proxy-tcc28\" (UID: \"88fe1d26-ba4e-4f5e-b455-3086e84953de\") " pod="kube-system/kube-proxy-tcc28"
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: E0419 19:41:28.615458   12176 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: E0419 19:41:28.615474   12176 projected.go:192] Error preparing data for projected volume kube-api-access-x6zs9 for pod kube-system/kube-proxy-tcc28: configmap "kube-root-ca.crt" not found
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: E0419 19:41:28.615497   12176 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/88fe1d26-ba4e-4f5e-b455-3086e84953de-kube-api-access-x6zs9 podName:88fe1d26-ba4e-4f5e-b455-3086e84953de nodeName:}" failed. No retries permitted until 2024-04-19 19:41:29.115488521 +0000 UTC m=+13.767213323 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x6zs9" (UniqueName: "kubernetes.io/projected/88fe1d26-ba4e-4f5e-b455-3086e84953de-kube-api-access-x6zs9") pod "kube-proxy-tcc28" (UID: "88fe1d26-ba4e-4f5e-b455-3086e84953de") : configmap "kube-root-ca.crt" not found
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: E0419 19:41:28.714122   12176 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: E0419 19:41:28.714144   12176 projected.go:192] Error preparing data for projected volume kube-api-access-92bwb for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: E0419 19:41:28.714433   12176 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/2b9b89b7-8c70-4978-a499-959ea57a3682-kube-api-access-92bwb podName:2b9b89b7-8c70-4978-a499-959ea57a3682 nodeName:}" failed. No retries permitted until 2024-04-19 19:41:29.714423202 +0000 UTC m=+14.366148003 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-92bwb" (UniqueName: "kubernetes.io/projected/2b9b89b7-8c70-4978-a499-959ea57a3682-kube-api-access-92bwb") pod "storage-provisioner" (UID: "2b9b89b7-8c70-4978-a499-959ea57a3682") : configmap "kube-root-ca.crt" not found
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: I0419 19:41:28.944699   12176 topology_manager.go:200] "Topology Admit Handler"
	Apr 19 19:41:28 running-upgrade-311000 kubelet[12176]: I0419 19:41:28.944965   12176 topology_manager.go:200] "Topology Admit Handler"
	Apr 19 19:41:29 running-upgrade-311000 kubelet[12176]: I0419 19:41:29.118168   12176 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08b30f5b-a267-4c7e-8336-79a31da516e6-config-volume\") pod \"coredns-6d4b75cb6d-5brmj\" (UID: \"08b30f5b-a267-4c7e-8336-79a31da516e6\") " pod="kube-system/coredns-6d4b75cb6d-5brmj"
	Apr 19 19:41:29 running-upgrade-311000 kubelet[12176]: I0419 19:41:29.118290   12176 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4vgl\" (UniqueName: \"kubernetes.io/projected/08b30f5b-a267-4c7e-8336-79a31da516e6-kube-api-access-h4vgl\") pod \"coredns-6d4b75cb6d-5brmj\" (UID: \"08b30f5b-a267-4c7e-8336-79a31da516e6\") " pod="kube-system/coredns-6d4b75cb6d-5brmj"
	Apr 19 19:41:29 running-upgrade-311000 kubelet[12176]: I0419 19:41:29.118311   12176 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/451c0817-780e-4940-ba08-6f28ea43c168-config-volume\") pod \"coredns-6d4b75cb6d-cmd9z\" (UID: \"451c0817-780e-4940-ba08-6f28ea43c168\") " pod="kube-system/coredns-6d4b75cb6d-cmd9z"
	Apr 19 19:41:29 running-upgrade-311000 kubelet[12176]: I0419 19:41:29.118322   12176 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7w8j\" (UniqueName: \"kubernetes.io/projected/451c0817-780e-4940-ba08-6f28ea43c168-kube-api-access-s7w8j\") pod \"coredns-6d4b75cb6d-cmd9z\" (UID: \"451c0817-780e-4940-ba08-6f28ea43c168\") " pod="kube-system/coredns-6d4b75cb6d-cmd9z"
	Apr 19 19:45:17 running-upgrade-311000 kubelet[12176]: I0419 19:45:17.161598   12176 scope.go:110] "RemoveContainer" containerID="c0251d75bd38b9dc7b2a3c9809b3545ac22083fe89819f07ac4d3a931825bcdf"
	Apr 19 19:45:17 running-upgrade-311000 kubelet[12176]: I0419 19:45:17.170261   12176 scope.go:110] "RemoveContainer" containerID="d044b3c4661d16901d53050ae7bb2a2db04d259c7116661c62e623e62c6f6dfa"
	
	
	==> storage-provisioner [6464f53916cf] <==
	I0419 19:41:30.013892       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0419 19:41:30.020002       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0419 19:41:30.020018       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0419 19:41:30.023986       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0419 19:41:30.024229       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-311000_992b0a7a-0aa8-4207-aaea-0a958d59af82!
	I0419 19:41:30.024037       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dfcae8df-a4ca-487c-b351-ccd2a81ff011", APIVersion:"v1", ResourceVersion:"374", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-311000_992b0a7a-0aa8-4207-aaea-0a958d59af82 became leader
	I0419 19:41:30.124798       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-311000_992b0a7a-0aa8-4207-aaea-0a958d59af82!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-311000 -n running-upgrade-311000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-311000 -n running-upgrade-311000: exit status 2 (15.782242542s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-311000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-311000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-311000
--- FAIL: TestRunningBinaryUpgrade (583.76s)

TestKubernetesUpgrade (18.79s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-777000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-777000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.880375417s)

-- stdout --
	* [kubernetes-upgrade-777000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-777000" primary control-plane node in "kubernetes-upgrade-777000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-777000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:39:05.154295    9216 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:39:05.154430    9216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:39:05.154434    9216 out.go:304] Setting ErrFile to fd 2...
	I0419 12:39:05.154437    9216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:39:05.154556    9216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:39:05.155824    9216 out.go:298] Setting JSON to false
	I0419 12:39:05.173447    9216 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5916,"bootTime":1713549629,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:39:05.173536    9216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:39:05.177566    9216 out.go:177] * [kubernetes-upgrade-777000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:39:05.184649    9216 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:39:05.188592    9216 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:39:05.184763    9216 notify.go:220] Checking for updates...
	I0419 12:39:05.194668    9216 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:39:05.197502    9216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:39:05.200588    9216 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:39:05.206487    9216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:39:05.210007    9216 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:39:05.210071    9216 config.go:182] Loaded profile config "running-upgrade-311000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:39:05.210121    9216 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:39:05.214512    9216 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:39:05.221619    9216 start.go:297] selected driver: qemu2
	I0419 12:39:05.221629    9216 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:39:05.221636    9216 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:39:05.223843    9216 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:39:05.227647    9216 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:39:05.230632    9216 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0419 12:39:05.230655    9216 cni.go:84] Creating CNI manager for ""
	I0419 12:39:05.230662    9216 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0419 12:39:05.230682    9216 start.go:340] cluster config:
	{Name:kubernetes-upgrade-777000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:39:05.235207    9216 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:39:05.242430    9216 out.go:177] * Starting "kubernetes-upgrade-777000" primary control-plane node in "kubernetes-upgrade-777000" cluster
	I0419 12:39:05.246629    9216 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0419 12:39:05.246646    9216 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0419 12:39:05.246654    9216 cache.go:56] Caching tarball of preloaded images
	I0419 12:39:05.246725    9216 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:39:05.246730    9216 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0419 12:39:05.246779    9216 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/kubernetes-upgrade-777000/config.json ...
	I0419 12:39:05.246794    9216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/kubernetes-upgrade-777000/config.json: {Name:mk39e956abb228cce1c0901b27210770372ca5ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:39:05.247028    9216 start.go:360] acquireMachinesLock for kubernetes-upgrade-777000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:39:05.247059    9216 start.go:364] duration metric: took 25µs to acquireMachinesLock for "kubernetes-upgrade-777000"
	I0419 12:39:05.247069    9216 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:39:05.247096    9216 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:39:05.250568    9216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:39:05.274516    9216 start.go:159] libmachine.API.Create for "kubernetes-upgrade-777000" (driver="qemu2")
	I0419 12:39:05.274547    9216 client.go:168] LocalClient.Create starting
	I0419 12:39:05.274619    9216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:39:05.274658    9216 main.go:141] libmachine: Decoding PEM data...
	I0419 12:39:05.274668    9216 main.go:141] libmachine: Parsing certificate...
	I0419 12:39:05.274714    9216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:39:05.274736    9216 main.go:141] libmachine: Decoding PEM data...
	I0419 12:39:05.274742    9216 main.go:141] libmachine: Parsing certificate...
	I0419 12:39:05.275081    9216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:39:05.408004    9216 main.go:141] libmachine: Creating SSH key...
	I0419 12:39:05.604448    9216 main.go:141] libmachine: Creating Disk image...
	I0419 12:39:05.604460    9216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:39:05.604654    9216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/disk.qcow2
	I0419 12:39:05.617770    9216 main.go:141] libmachine: STDOUT: 
	I0419 12:39:05.617793    9216 main.go:141] libmachine: STDERR: 
	I0419 12:39:05.617882    9216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/disk.qcow2 +20000M
	I0419 12:39:05.628834    9216 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:39:05.628853    9216 main.go:141] libmachine: STDERR: 
	I0419 12:39:05.628870    9216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/disk.qcow2
	I0419 12:39:05.628877    9216 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:39:05.628913    9216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:f0:27:9f:03:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/disk.qcow2
	I0419 12:39:05.630682    9216 main.go:141] libmachine: STDOUT: 
	I0419 12:39:05.630700    9216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:39:05.630723    9216 client.go:171] duration metric: took 356.177666ms to LocalClient.Create
	I0419 12:39:07.632914    9216 start.go:128] duration metric: took 2.385844292s to createHost
	I0419 12:39:07.632990    9216 start.go:83] releasing machines lock for "kubernetes-upgrade-777000", held for 2.38597725s
	W0419 12:39:07.633029    9216 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:39:07.644319    9216 out.go:177] * Deleting "kubernetes-upgrade-777000" in qemu2 ...
	W0419 12:39:07.665315    9216 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:39:07.665345    9216 start.go:728] Will try again in 5 seconds ...
	I0419 12:39:12.667498    9216 start.go:360] acquireMachinesLock for kubernetes-upgrade-777000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:39:12.668095    9216 start.go:364] duration metric: took 452.917µs to acquireMachinesLock for "kubernetes-upgrade-777000"
	I0419 12:39:12.668179    9216 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:39:12.668455    9216 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:39:12.676155    9216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:39:12.724214    9216 start.go:159] libmachine.API.Create for "kubernetes-upgrade-777000" (driver="qemu2")
	I0419 12:39:12.724265    9216 client.go:168] LocalClient.Create starting
	I0419 12:39:12.724390    9216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:39:12.724471    9216 main.go:141] libmachine: Decoding PEM data...
	I0419 12:39:12.724488    9216 main.go:141] libmachine: Parsing certificate...
	I0419 12:39:12.724545    9216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:39:12.724591    9216 main.go:141] libmachine: Decoding PEM data...
	I0419 12:39:12.724601    9216 main.go:141] libmachine: Parsing certificate...
	I0419 12:39:12.725228    9216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:39:12.855722    9216 main.go:141] libmachine: Creating SSH key...
	I0419 12:39:12.936753    9216 main.go:141] libmachine: Creating Disk image...
	I0419 12:39:12.936760    9216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:39:12.936949    9216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/disk.qcow2
	I0419 12:39:12.949943    9216 main.go:141] libmachine: STDOUT: 
	I0419 12:39:12.949973    9216 main.go:141] libmachine: STDERR: 
	I0419 12:39:12.950026    9216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/disk.qcow2 +20000M
	I0419 12:39:12.961233    9216 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:39:12.961251    9216 main.go:141] libmachine: STDERR: 
	I0419 12:39:12.961263    9216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/disk.qcow2
	I0419 12:39:12.961268    9216 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:39:12.961296    9216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:82:8d:cd:c0:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/disk.qcow2
	I0419 12:39:12.963150    9216 main.go:141] libmachine: STDOUT: 
	I0419 12:39:12.963168    9216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:39:12.963187    9216 client.go:171] duration metric: took 238.92275ms to LocalClient.Create
	I0419 12:39:14.965353    9216 start.go:128] duration metric: took 2.296906875s to createHost
	I0419 12:39:14.965524    9216 start.go:83] releasing machines lock for "kubernetes-upgrade-777000", held for 2.29739275s
	W0419 12:39:14.965881    9216 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-777000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-777000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:39:14.975500    9216 out.go:177] 
	W0419 12:39:14.980686    9216 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:39:14.980714    9216 out.go:239] * 
	* 
	W0419 12:39:14.983259    9216 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:39:14.990614    9216 out.go:177] 
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-777000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-777000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-777000: (3.486724875s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-777000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-777000 status --format={{.Host}}: exit status 7 (64.100542ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-777000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-777000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.192272333s)
-- stdout --
	* [kubernetes-upgrade-777000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-777000" primary control-plane node in "kubernetes-upgrade-777000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-777000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-777000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0419 12:39:18.589324    9252 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:39:18.589449    9252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:39:18.589455    9252 out.go:304] Setting ErrFile to fd 2...
	I0419 12:39:18.589457    9252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:39:18.589586    9252 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:39:18.590594    9252 out.go:298] Setting JSON to false
	I0419 12:39:18.606948    9252 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5929,"bootTime":1713549629,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:39:18.607024    9252 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:39:18.611580    9252 out.go:177] * [kubernetes-upgrade-777000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:39:18.618596    9252 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:39:18.618710    9252 notify.go:220] Checking for updates...
	I0419 12:39:18.626518    9252 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:39:18.634502    9252 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:39:18.638570    9252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:39:18.641602    9252 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:39:18.644569    9252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:39:18.647925    9252 config.go:182] Loaded profile config "kubernetes-upgrade-777000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0419 12:39:18.648192    9252 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:39:18.652397    9252 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:39:18.659533    9252 start.go:297] selected driver: qemu2
	I0419 12:39:18.659544    9252 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:39:18.659608    9252 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:39:18.661982    9252 cni.go:84] Creating CNI manager for ""
	I0419 12:39:18.662001    9252 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:39:18.662026    9252 start.go:340] cluster config:
	{Name:kubernetes-upgrade-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:39:18.666342    9252 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:39:18.674548    9252 out.go:177] * Starting "kubernetes-upgrade-777000" primary control-plane node in "kubernetes-upgrade-777000" cluster
	I0419 12:39:18.678601    9252 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:39:18.678620    9252 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:39:18.678630    9252 cache.go:56] Caching tarball of preloaded images
	I0419 12:39:18.678687    9252 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:39:18.678692    9252 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:39:18.678736    9252 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/kubernetes-upgrade-777000/config.json ...
	I0419 12:39:18.679188    9252 start.go:360] acquireMachinesLock for kubernetes-upgrade-777000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:39:18.679215    9252 start.go:364] duration metric: took 20.875µs to acquireMachinesLock for "kubernetes-upgrade-777000"
	I0419 12:39:18.679225    9252 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:39:18.679230    9252 fix.go:54] fixHost starting: 
	I0419 12:39:18.679340    9252 fix.go:112] recreateIfNeeded on kubernetes-upgrade-777000: state=Stopped err=<nil>
	W0419 12:39:18.679349    9252 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:39:18.687500    9252 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-777000" ...
	I0419 12:39:18.691495    9252 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:82:8d:cd:c0:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/disk.qcow2
	I0419 12:39:18.693418    9252 main.go:141] libmachine: STDOUT: 
	I0419 12:39:18.693433    9252 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:39:18.693456    9252 fix.go:56] duration metric: took 14.225291ms for fixHost
	I0419 12:39:18.693461    9252 start.go:83] releasing machines lock for "kubernetes-upgrade-777000", held for 14.241417ms
	W0419 12:39:18.693466    9252 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:39:18.693496    9252 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:39:18.693500    9252 start.go:728] Will try again in 5 seconds ...
	I0419 12:39:23.695681    9252 start.go:360] acquireMachinesLock for kubernetes-upgrade-777000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:39:23.696189    9252 start.go:364] duration metric: took 377.583µs to acquireMachinesLock for "kubernetes-upgrade-777000"
	I0419 12:39:23.696269    9252 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:39:23.696286    9252 fix.go:54] fixHost starting: 
	I0419 12:39:23.696939    9252 fix.go:112] recreateIfNeeded on kubernetes-upgrade-777000: state=Stopped err=<nil>
	W0419 12:39:23.696967    9252 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:39:23.705663    9252 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-777000" ...
	I0419 12:39:23.708865    9252 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:82:8d:cd:c0:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubernetes-upgrade-777000/disk.qcow2
	I0419 12:39:23.715792    9252 main.go:141] libmachine: STDOUT: 
	I0419 12:39:23.715855    9252 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:39:23.715912    9252 fix.go:56] duration metric: took 19.630709ms for fixHost
	I0419 12:39:23.715929    9252 start.go:83] releasing machines lock for "kubernetes-upgrade-777000", held for 19.71675ms
	W0419 12:39:23.716107    9252 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-777000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-777000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:39:23.723655    9252 out.go:177] 
	W0419 12:39:23.726758    9252 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:39:23.726782    9252 out.go:239] * 
	* 
	W0419 12:39:23.727789    9252 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:39:23.740732    9252 out.go:177] 
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-777000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-777000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-777000 version --output=json: exit status 1 (35.738209ms)
** stderr ** 
	error: context "kubernetes-upgrade-777000" does not exist
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-04-19 12:39:23.786242 -0700 PDT m=+973.353163501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-777000 -n kubernetes-upgrade-777000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-777000 -n kubernetes-upgrade-777000: exit status 7 (32.356416ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-777000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-777000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-777000
--- FAIL: TestKubernetesUpgrade (18.79s)
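
Triage note: every start attempt above dies at the same point — socket_vmnet_client gets "Connection refused" dialing /var/run/socket_vmnet, meaning no socket_vmnet daemon was listening on the CI host, so QEMU was never launched. A minimal standalone probe is sketched below (a hypothetical helper, not part of the minikube test suite) that dials the same Unix socket before the QEMU jobs run; the socket path is taken from the failing log lines:

	// socket_vmnet_probe.go — hypothetical pre-flight check for the CI host.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing log lines above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A "connection refused" here reproduces the failure seen in the log.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}

If such a probe fails, restarting the socket_vmnet daemon on the agent (via whatever service manager runs it there) is the fix to try; nothing inside the test binary can recover from the missing daemon.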
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.19s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=18669
- KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current675788190/001
* Using the hyperkit driver based on user configuration
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.19s)
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.09s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=18669
- KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2117168387/001
* Using the hyperkit driver based on user configuration
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.09s)
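
Triage note: both hyperkit skip-upgrade subtests fail for the same structural reason — the hyperkit driver is Intel-only, so on this darwin/arm64 agent minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic runs. The platform gate involved looks roughly like the sketch below (illustrative only; the real check lives in minikube's driver registry):

	// hyperkit_gate.go — illustrative platform check, not minikube source.
	package main

	import (
		"fmt"
		"runtime"
	)

	// hyperkit wraps Hypervisor.framework on Intel Macs only.
	func hyperkitSupported() bool {
		return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
	}

	func main() {
		if !hyperkitSupported() {
			fmt.Printf("driver 'hyperkit' is not supported on %s/%s\n", runtime.GOOS, runtime.GOARCH)
			return // a test harness would t.Skip() on this condition instead of failing
		}
		fmt.Println("hyperkit driver available")
	}

A harness running on an arm64 agent would typically skip these subtests on that condition rather than count them as failures.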
TestStoppedBinaryUpgrade/Upgrade (574.45s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1717205742 start -p stopped-upgrade-860000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1717205742 start -p stopped-upgrade-860000 --memory=2200 --vm-driver=qemu2 : (40.616464709s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1717205742 -p stopped-upgrade-860000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1717205742 -p stopped-upgrade-860000 stop: (12.116778959s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-860000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-860000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.618527042s)
-- stdout --
	* [stopped-upgrade-860000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-860000" primary control-plane node in "stopped-upgrade-860000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-860000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0419 12:40:17.739640    9295 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:40:17.739783    9295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:40:17.739787    9295 out.go:304] Setting ErrFile to fd 2...
	I0419 12:40:17.739790    9295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:40:17.739936    9295 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:40:17.741014    9295 out.go:298] Setting JSON to false
	I0419 12:40:17.759699    9295 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5988,"bootTime":1713549629,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:40:17.759764    9295 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:40:17.764654    9295 out.go:177] * [stopped-upgrade-860000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:40:17.770669    9295 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:40:17.770745    9295 notify.go:220] Checking for updates...
	I0419 12:40:17.774601    9295 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:40:17.777606    9295 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:40:17.780672    9295 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:40:17.783612    9295 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:40:17.786648    9295 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:40:17.789984    9295 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:40:17.793544    9295 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0419 12:40:17.796626    9295 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:40:17.800609    9295 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:40:17.807649    9295 start.go:297] selected driver: qemu2
	I0419 12:40:17.807656    9295 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51447 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0419 12:40:17.807718    9295 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:40:17.810339    9295 cni.go:84] Creating CNI manager for ""
	I0419 12:40:17.810364    9295 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:40:17.810402    9295 start.go:340] cluster config:
	{Name:stopped-upgrade-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51447 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0419 12:40:17.810455    9295 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:40:17.815562    9295 out.go:177] * Starting "stopped-upgrade-860000" primary control-plane node in "stopped-upgrade-860000" cluster
	I0419 12:40:17.819606    9295 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0419 12:40:17.819622    9295 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0419 12:40:17.819629    9295 cache.go:56] Caching tarball of preloaded images
	I0419 12:40:17.819702    9295 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:40:17.819707    9295 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0419 12:40:17.819769    9295 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/config.json ...
	I0419 12:40:17.820227    9295 start.go:360] acquireMachinesLock for stopped-upgrade-860000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:40:17.820283    9295 start.go:364] duration metric: took 47.167µs to acquireMachinesLock for "stopped-upgrade-860000"
	I0419 12:40:17.820293    9295 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:40:17.820297    9295 fix.go:54] fixHost starting: 
	I0419 12:40:17.820419    9295 fix.go:112] recreateIfNeeded on stopped-upgrade-860000: state=Stopped err=<nil>
	W0419 12:40:17.820428    9295 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:40:17.828588    9295 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-860000" ...
	I0419 12:40:17.832516    9295 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51412-:22,hostfwd=tcp::51413-:2376,hostname=stopped-upgrade-860000 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/disk.qcow2
	I0419 12:40:17.880681    9295 main.go:141] libmachine: STDOUT: 
	I0419 12:40:17.880703    9295 main.go:141] libmachine: STDERR: 
	I0419 12:40:17.880709    9295 main.go:141] libmachine: Waiting for VM to start (ssh -p 51412 docker@127.0.0.1)...
	I0419 12:40:38.066210    9295 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/config.json ...
	I0419 12:40:38.066927    9295 machine.go:94] provisionDockerMachine start ...
	I0419 12:40:38.067019    9295 main.go:141] libmachine: Using SSH client type: native
	I0419 12:40:38.067342    9295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033a5c80] 0x1033a84e0 <nil>  [] 0s} localhost 51412 <nil> <nil>}
	I0419 12:40:38.067354    9295 main.go:141] libmachine: About to run SSH command:
	hostname
	I0419 12:40:38.145023    9295 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0419 12:40:38.145055    9295 buildroot.go:166] provisioning hostname "stopped-upgrade-860000"
	I0419 12:40:38.145135    9295 main.go:141] libmachine: Using SSH client type: native
	I0419 12:40:38.145382    9295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033a5c80] 0x1033a84e0 <nil>  [] 0s} localhost 51412 <nil> <nil>}
	I0419 12:40:38.145395    9295 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-860000 && echo "stopped-upgrade-860000" | sudo tee /etc/hostname
	I0419 12:40:38.218738    9295 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-860000
	
	I0419 12:40:38.218814    9295 main.go:141] libmachine: Using SSH client type: native
	I0419 12:40:38.218987    9295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033a5c80] 0x1033a84e0 <nil>  [] 0s} localhost 51412 <nil> <nil>}
	I0419 12:40:38.219000    9295 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-860000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-860000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-860000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0419 12:40:38.282803    9295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0419 12:40:38.282818    9295 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18669-6895/.minikube CaCertPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18669-6895/.minikube}
	I0419 12:40:38.282827    9295 buildroot.go:174] setting up certificates
	I0419 12:40:38.282838    9295 provision.go:84] configureAuth start
	I0419 12:40:38.282843    9295 provision.go:143] copyHostCerts
	I0419 12:40:38.282929    9295 exec_runner.go:144] found /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.pem, removing ...
	I0419 12:40:38.282937    9295 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.pem
	I0419 12:40:38.283046    9295 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.pem (1078 bytes)
	I0419 12:40:38.283252    9295 exec_runner.go:144] found /Users/jenkins/minikube-integration/18669-6895/.minikube/cert.pem, removing ...
	I0419 12:40:38.283257    9295 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18669-6895/.minikube/cert.pem
	I0419 12:40:38.283311    9295 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18669-6895/.minikube/cert.pem (1123 bytes)
	I0419 12:40:38.283428    9295 exec_runner.go:144] found /Users/jenkins/minikube-integration/18669-6895/.minikube/key.pem, removing ...
	I0419 12:40:38.283432    9295 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18669-6895/.minikube/key.pem
	I0419 12:40:38.283482    9295 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18669-6895/.minikube/key.pem (1679 bytes)
	I0419 12:40:38.283573    9295 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-860000 san=[127.0.0.1 localhost minikube stopped-upgrade-860000]
	I0419 12:40:38.352784    9295 provision.go:177] copyRemoteCerts
	I0419 12:40:38.352826    9295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0419 12:40:38.352834    9295 sshutil.go:53] new ssh client: &{IP:localhost Port:51412 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/id_rsa Username:docker}
	I0419 12:40:38.384349    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0419 12:40:38.391321    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0419 12:40:38.398508    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0419 12:40:38.405794    9295 provision.go:87] duration metric: took 122.949792ms to configureAuth
	I0419 12:40:38.405803    9295 buildroot.go:189] setting minikube options for container-runtime
	I0419 12:40:38.405929    9295 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:40:38.405964    9295 main.go:141] libmachine: Using SSH client type: native
	I0419 12:40:38.406060    9295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033a5c80] 0x1033a84e0 <nil>  [] 0s} localhost 51412 <nil> <nil>}
	I0419 12:40:38.406065    9295 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0419 12:40:38.463375    9295 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0419 12:40:38.463382    9295 buildroot.go:70] root file system type: tmpfs
	I0419 12:40:38.463437    9295 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0419 12:40:38.463477    9295 main.go:141] libmachine: Using SSH client type: native
	I0419 12:40:38.463621    9295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033a5c80] 0x1033a84e0 <nil>  [] 0s} localhost 51412 <nil> <nil>}
	I0419 12:40:38.463657    9295 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0419 12:40:38.528240    9295 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0419 12:40:38.528293    9295 main.go:141] libmachine: Using SSH client type: native
	I0419 12:40:38.528410    9295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033a5c80] 0x1033a84e0 <nil>  [] 0s} localhost 51412 <nil> <nil>}
	I0419 12:40:38.528422    9295 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0419 12:40:38.866061    9295 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0419 12:40:38.866074    9295 machine.go:97] duration metric: took 799.147041ms to provisionDockerMachine
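
The docker.service update above is idempotent by construction: `diff -u` exits zero when the rendered unit matches the deployed one and the block after `||` never runs; here diff fails because the unit does not exist yet, so the new file is moved into place, the daemon reloaded, and docker enabled and restarted (hence the "Created symlink" output). A rough Go equivalent of that compare-then-swap, assuming we shell out to systemctl; names are illustrative:

package unitsketch

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnit replaces dst only when the rendered content differs from what
// is deployed, then reloads systemd and enables/restarts the service, with
// the same effect as the `diff -u ... || { mv ...; systemctl ... }` one-liner.
func installUnit(dst string, rendered []byte, service string) error {
	current, err := os.ReadFile(dst)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // identical: diff would exit 0 and nothing runs
	}
	if err := os.WriteFile(dst, rendered, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"-f", "enable", service},
		{"-f", "restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}
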
	I0419 12:40:38.866080    9295 start.go:293] postStartSetup for "stopped-upgrade-860000" (driver="qemu2")
	I0419 12:40:38.866086    9295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0419 12:40:38.866161    9295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0419 12:40:38.866171    9295 sshutil.go:53] new ssh client: &{IP:localhost Port:51412 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/id_rsa Username:docker}
	I0419 12:40:38.897832    9295 ssh_runner.go:195] Run: cat /etc/os-release
	I0419 12:40:38.899424    9295 info.go:137] Remote host: Buildroot 2021.02.12
	I0419 12:40:38.899434    9295 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18669-6895/.minikube/addons for local assets ...
	I0419 12:40:38.899517    9295 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18669-6895/.minikube/files for local assets ...
	I0419 12:40:38.899634    9295 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18669-6895/.minikube/files/etc/ssl/certs/73042.pem -> 73042.pem in /etc/ssl/certs
	I0419 12:40:38.899764    9295 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0419 12:40:38.902506    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/files/etc/ssl/certs/73042.pem --> /etc/ssl/certs/73042.pem (1708 bytes)
	I0419 12:40:38.909837    9295 start.go:296] duration metric: took 43.753167ms for postStartSetup
	I0419 12:40:38.909851    9295 fix.go:56] duration metric: took 21.089818042s for fixHost
	I0419 12:40:38.909886    9295 main.go:141] libmachine: Using SSH client type: native
	I0419 12:40:38.909990    9295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033a5c80] 0x1033a84e0 <nil>  [] 0s} localhost 51412 <nil> <nil>}
	I0419 12:40:38.909995    9295 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0419 12:40:38.967841    9295 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713555639.037568046
	
	I0419 12:40:38.967849    9295 fix.go:216] guest clock: 1713555639.037568046
	I0419 12:40:38.967853    9295 fix.go:229] Guest: 2024-04-19 12:40:39.037568046 -0700 PDT Remote: 2024-04-19 12:40:38.909853 -0700 PDT m=+21.204413251 (delta=127.715046ms)
	I0419 12:40:38.967863    9295 fix.go:200] guest clock delta is within tolerance: 127.715046ms
	I0419 12:40:38.967865    9295 start.go:83] releasing machines lock for "stopped-upgrade-860000", held for 21.1478425s
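
fix.go compares the guest's `date +%s.%N` output against the host clock and only resyncs when the delta exceeds a tolerance; here the 127.7ms delta passes. A small sketch of that check; the parse format and the sub-second tolerance are assumptions inferred from the log:

package clocksketch

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` run on the guest and
// returns how far the guest clock is from the host clock. Assumes the full
// nine-digit nanosecond field that `date +%s.%N` prints.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, fmt.Errorf("parse seconds: %w", err)
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, fmt.Errorf("parse nanoseconds: %w", err)
		}
	}
	delta := time.Unix(sec, nsec).Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

// Usage, matching the log's numbers:
//   delta, _ := guestClockDelta("1713555639.037568046\n", hostNow)
//   if delta < time.Second { /* within tolerance, skip resync */ }
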
	I0419 12:40:38.967916    9295 ssh_runner.go:195] Run: cat /version.json
	I0419 12:40:38.967922    9295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0419 12:40:38.967923    9295 sshutil.go:53] new ssh client: &{IP:localhost Port:51412 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/id_rsa Username:docker}
	I0419 12:40:38.967941    9295 sshutil.go:53] new ssh client: &{IP:localhost Port:51412 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/id_rsa Username:docker}
	W0419 12:40:38.968505    9295 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51412: connect: connection refused
	I0419 12:40:38.968528    9295 retry.go:31] will retry after 131.837075ms: dial tcp [::1]:51412: connect: connection refused
	W0419 12:40:39.136535    9295 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0419 12:40:39.136599    9295 ssh_runner.go:195] Run: systemctl --version
	I0419 12:40:39.138858    9295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0419 12:40:39.141895    9295 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0419 12:40:39.141929    9295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0419 12:40:39.145818    9295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0419 12:40:39.158679    9295 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
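
The two `find ... sed` invocations above rewrite any bridge/podman CNI config so its subnet becomes the pod CIDR 10.244.0.0/16. As an illustrative alternative only (the log shows minikube does this with sed), the same rewrite done structurally over the conflist JSON, assuming the common shape of a top-level "plugins" array with per-plugin "ipam" objects:

package cnisketch

import (
	"encoding/json"
	"os"
)

// setPodSubnet rewrites the "subnet" of every plugin's ipam block in a CNI
// conflist to the given pod CIDR. A structural stand-in for the sed rewrite.
func setPodSubnet(path, subnet string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		return err
	}
	plugins, _ := conf["plugins"].([]any)
	for _, p := range plugins {
		plugin, _ := p.(map[string]any)
		ipam, _ := plugin["ipam"].(map[string]any)
		if ipam == nil {
			continue
		}
		ipam["subnet"] = subnet // e.g. "10.244.0.0/16"
	}
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, out, 0o644)
}
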
	I0419 12:40:39.158693    9295 start.go:494] detecting cgroup driver to use...
	I0419 12:40:39.158788    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 12:40:39.166818    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0419 12:40:39.170151    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0419 12:40:39.173105    9295 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0419 12:40:39.173143    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0419 12:40:39.176262    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 12:40:39.179251    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0419 12:40:39.182301    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0419 12:40:39.187797    9295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0419 12:40:39.191364    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0419 12:40:39.195533    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0419 12:40:39.200678    9295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0419 12:40:39.203705    9295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0419 12:40:39.206778    9295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0419 12:40:39.210055    9295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:40:39.267025    9295 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0419 12:40:39.277674    9295 start.go:494] detecting cgroup driver to use...
	I0419 12:40:39.277757    9295 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0419 12:40:39.282753    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 12:40:39.287905    9295 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0419 12:40:39.297608    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0419 12:40:39.302445    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 12:40:39.307325    9295 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0419 12:40:39.350222    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0419 12:40:39.355535    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0419 12:40:39.360873    9295 ssh_runner.go:195] Run: which cri-dockerd
	I0419 12:40:39.362088    9295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0419 12:40:39.364719    9295 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0419 12:40:39.369628    9295 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0419 12:40:39.433534    9295 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0419 12:40:39.502418    9295 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0419 12:40:39.502469    9295 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0419 12:40:39.507845    9295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:40:39.568854    9295 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 12:40:40.699756    9295 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.130899667s)
	I0419 12:40:40.699824    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0419 12:40:40.704742    9295 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0419 12:40:40.711268    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 12:40:40.716160    9295 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0419 12:40:40.768755    9295 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0419 12:40:40.828148    9295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:40:40.888822    9295 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0419 12:40:40.894848    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0419 12:40:40.899833    9295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:40:40.970195    9295 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0419 12:40:41.009689    9295 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0419 12:40:41.009772    9295 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0419 12:40:41.011874    9295 start.go:562] Will wait 60s for crictl version
	I0419 12:40:41.011932    9295 ssh_runner.go:195] Run: which crictl
	I0419 12:40:41.013686    9295 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0419 12:40:41.028410    9295 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0419 12:40:41.028490    9295 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 12:40:41.045199    9295 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0419 12:40:41.068587    9295 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0419 12:40:41.068705    9295 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0419 12:40:41.070017    9295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
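
Both host entries in this run (`host.minikube.internal` here, `control-plane.minikube.internal` later) use the same idempotent pattern: grep first, and only when the entry is absent, strip any stale line and append the new mapping. Sketched in Go; the suffix match mirrors the `grep -v $'\t...'` filter in the shell one-liner:

package hostsketch

import (
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line for host and appends "ip\thost",
// matching the `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ...` command.
func ensureHostsEntry(path, ip, host string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}
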
	I0419 12:40:41.073799    9295 kubeadm.go:877] updating cluster {Name:stopped-upgrade-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51447 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0419 12:40:41.073849    9295 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0419 12:40:41.073886    9295 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 12:40:41.084515    9295 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0419 12:40:41.084529    9295 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0419 12:40:41.084571    9295 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0419 12:40:41.087401    9295 ssh_runner.go:195] Run: which lz4
	I0419 12:40:41.088766    9295 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0419 12:40:41.089848    9295 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0419 12:40:41.089857    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0419 12:40:41.838317    9295 docker.go:649] duration metric: took 749.598291ms to copy over tarball
	I0419 12:40:41.838379    9295 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0419 12:40:43.000758    9295 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.162380833s)
	I0419 12:40:43.000775    9295 ssh_runner.go:146] rm: /preloaded.tar.lz4
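
The preload path above is: stat the tarball on the guest, scp it over when missing, extract it into /var with an lz4-aware tar, then delete it. The extract step wraps exactly the command the log shows; a compact Go sketch, assuming tar and lz4 are present in the guest image as they are here:

package preloadsketch

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var, preserving
// security xattrs, as in `sudo tar --xattrs --xattrs-include
// security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return nil
}
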
	I0419 12:40:43.016069    9295 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0419 12:40:43.018944    9295 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0419 12:40:43.023904    9295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:40:43.086289    9295 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0419 12:40:44.770321    9295 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.684043292s)
	I0419 12:40:44.770418    9295 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0419 12:40:44.786884    9295 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0419 12:40:44.786894    9295 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0419 12:40:44.786901    9295 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0419 12:40:44.794884    9295 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0419 12:40:44.794936    9295 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0419 12:40:44.795017    9295 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:40:44.795123    9295 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:40:44.795158    9295 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0419 12:40:44.795232    9295 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0419 12:40:44.795672    9295 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0419 12:40:44.795854    9295 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0419 12:40:44.803677    9295 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0419 12:40:44.803866    9295 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0419 12:40:44.803976    9295 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0419 12:40:44.804060    9295 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:40:44.804091    9295 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:40:44.804299    9295 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0419 12:40:44.804531    9295 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0419 12:40:44.804530    9295 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0419 12:40:45.214298    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0419 12:40:45.225369    9295 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0419 12:40:45.225398    9295 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0419 12:40:45.225448    9295 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0419 12:40:45.235457    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0419 12:40:45.246341    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0419 12:40:45.256524    9295 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0419 12:40:45.256546    9295 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0419 12:40:45.256589    9295 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0419 12:40:45.257822    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0419 12:40:45.270489    9295 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0419 12:40:45.270508    9295 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0419 12:40:45.270556    9295 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0419 12:40:45.270588    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0419 12:40:45.280541    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0419 12:40:45.299338    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0419 12:40:45.300801    9295 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0419 12:40:45.300891    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:40:45.309307    9295 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0419 12:40:45.309327    9295 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0419 12:40:45.309378    9295 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0419 12:40:45.319181    9295 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0419 12:40:45.319204    9295 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:40:45.319252    9295 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0419 12:40:45.319269    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0419 12:40:45.328521    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0419 12:40:45.329624    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0419 12:40:45.330820    9295 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0419 12:40:45.341830    9295 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0419 12:40:45.341850    9295 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0419 12:40:45.341830    9295 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0419 12:40:45.341889    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0419 12:40:45.341908    9295 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0419 12:40:45.343723    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0419 12:40:45.370828    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0419 12:40:45.370877    9295 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0419 12:40:45.370900    9295 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0419 12:40:45.370952    9295 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0419 12:40:45.389797    9295 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0419 12:40:45.389812    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0419 12:40:45.401123    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0419 12:40:45.401264    9295 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0419 12:40:45.435611    9295 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0419 12:40:45.435649    9295 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0419 12:40:45.435670    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0419 12:40:45.442524    9295 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0419 12:40:45.442534    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0419 12:40:45.468003    9295 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0419 12:40:45.611242    9295 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0419 12:40:45.611337    9295 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:40:45.621895    9295 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0419 12:40:45.621919    9295 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:40:45.621973    9295 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:40:45.636474    9295 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0419 12:40:45.636617    9295 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0419 12:40:45.638080    9295 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0419 12:40:45.638097    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0419 12:40:45.662778    9295 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0419 12:40:45.662797    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0419 12:40:45.905524    9295 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0419 12:40:45.905569    9295 cache_images.go:92] duration metric: took 1.118681792s to LoadCachedImages
	W0419 12:40:45.905609    9295 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
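
The failure above follows from the preload contents: the tarball ships images tagged k8s.gcr.io/*, while this minikube expects registry.k8s.io/* names, so every expected image fails the existence check ("needs transfer") and is re-loaded from the on-disk cache; kube-scheduler has no cached file at all, which produces the X error. The existence check itself reduces to comparing `docker image inspect --format {{.Id}}` against the expected hash, sketched here:

package cachesketch

import (
	"os/exec"
	"strings"
)

// imageExistsAtHash reports whether the runtime has image at the expected ID,
// the check behind the "needs transfer" lines (cache_images.go:116): inspect
// fails when the name is untagged, and a mismatched ID also forces a reload.
func imageExistsAtHash(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return false // not present under this name at all
	}
	return strings.TrimSpace(string(out)) == wantID
}
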
	I0419 12:40:45.905615    9295 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0419 12:40:45.905666    9295 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-860000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0419 12:40:45.905721    9295 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0419 12:40:45.919566    9295 cni.go:84] Creating CNI manager for ""
	I0419 12:40:45.919578    9295 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:40:45.919582    9295 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0419 12:40:45.919593    9295 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-860000 NodeName:stopped-upgrade-860000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0419 12:40:45.919658    9295 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-860000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0419 12:40:45.919711    9295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0419 12:40:45.922868    9295 binaries.go:44] Found k8s binaries, skipping transfer
	I0419 12:40:45.922894    9295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0419 12:40:45.925939    9295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0419 12:40:45.930976    9295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0419 12:40:45.936017    9295 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0419 12:40:45.940912    9295 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0419 12:40:45.942069    9295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0419 12:40:45.945932    9295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:40:46.010942    9295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 12:40:46.017399    9295 certs.go:68] Setting up /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000 for IP: 10.0.2.15
	I0419 12:40:46.017407    9295 certs.go:194] generating shared ca certs ...
	I0419 12:40:46.017416    9295 certs.go:226] acquiring lock for ca certs: {Name:mke38b98dd5558382d381a0a6e0e324ad9664707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:40:46.017581    9295 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.key
	I0419 12:40:46.017629    9295 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/proxy-client-ca.key
	I0419 12:40:46.017635    9295 certs.go:256] generating profile certs ...
	I0419 12:40:46.017710    9295 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/client.key
	I0419 12:40:46.017729    9295 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.key.8352d7f7
	I0419 12:40:46.017741    9295 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.crt.8352d7f7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0419 12:40:46.136552    9295 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.crt.8352d7f7 ...
	I0419 12:40:46.136568    9295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.crt.8352d7f7: {Name:mk0761eb88abc89e7c785f10ca01a4f153b316ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:40:46.136890    9295 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.key.8352d7f7 ...
	I0419 12:40:46.136895    9295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.key.8352d7f7: {Name:mkbf53f0ffca4dce5ad5fa220496f7f4a08a3405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:40:46.137025    9295 certs.go:381] copying /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.crt.8352d7f7 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.crt
	I0419 12:40:46.137175    9295 certs.go:385] copying /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.key.8352d7f7 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.key
	I0419 12:40:46.137335    9295 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/proxy-client.key
	I0419 12:40:46.137466    9295 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/7304.pem (1338 bytes)
	W0419 12:40:46.137500    9295 certs.go:480] ignoring /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/7304_empty.pem, impossibly tiny 0 bytes
	I0419 12:40:46.137506    9295 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca-key.pem (1679 bytes)
	I0419 12:40:46.137526    9295 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem (1078 bytes)
	I0419 12:40:46.137544    9295 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem (1123 bytes)
	I0419 12:40:46.137562    9295 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/key.pem (1679 bytes)
	I0419 12:40:46.137603    9295 certs.go:484] found cert: /Users/jenkins/minikube-integration/18669-6895/.minikube/files/etc/ssl/certs/73042.pem (1708 bytes)
	I0419 12:40:46.137949    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0419 12:40:46.145289    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0419 12:40:46.151698    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0419 12:40:46.158734    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0419 12:40:46.166024    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0419 12:40:46.173607    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0419 12:40:46.179859    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0419 12:40:46.186618    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0419 12:40:46.194010    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0419 12:40:46.200351    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/7304.pem --> /usr/share/ca-certificates/7304.pem (1338 bytes)
	I0419 12:40:46.206587    9295 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18669-6895/.minikube/files/etc/ssl/certs/73042.pem --> /usr/share/ca-certificates/73042.pem (1708 bytes)
	I0419 12:40:46.213775    9295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0419 12:40:46.219015    9295 ssh_runner.go:195] Run: openssl version
	I0419 12:40:46.220878    9295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73042.pem && ln -fs /usr/share/ca-certificates/73042.pem /etc/ssl/certs/73042.pem"
	I0419 12:40:46.223747    9295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73042.pem
	I0419 12:40:46.225056    9295 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 19 19:24 /usr/share/ca-certificates/73042.pem
	I0419 12:40:46.225076    9295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73042.pem
	I0419 12:40:46.226670    9295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73042.pem /etc/ssl/certs/3ec20f2e.0"
	I0419 12:40:46.229699    9295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0419 12:40:46.232421    9295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0419 12:40:46.233695    9295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 19 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0419 12:40:46.233717    9295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0419 12:40:46.235486    9295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0419 12:40:46.238694    9295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7304.pem && ln -fs /usr/share/ca-certificates/7304.pem /etc/ssl/certs/7304.pem"
	I0419 12:40:46.241935    9295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7304.pem
	I0419 12:40:46.243330    9295 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 19 19:24 /usr/share/ca-certificates/7304.pem
	I0419 12:40:46.243353    9295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7304.pem
	I0419 12:40:46.245149    9295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7304.pem /etc/ssl/certs/51391683.0"
	I0419 12:40:46.248038    9295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0419 12:40:46.249397    9295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0419 12:40:46.251444    9295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0419 12:40:46.253216    9295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0419 12:40:46.254939    9295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0419 12:40:46.256756    9295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0419 12:40:46.258384    9295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
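
Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 24 hours, which is how minikube decides whether control-plane certs need regeneration. The same check expressed in Go (only the function name is an invention here):

package certcheck

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the analogue of `openssl x509 -noout -in <path> -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}
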
	I0419 12:40:46.260124    9295 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-860000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51447 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0419 12:40:46.260185    9295 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0419 12:40:46.270339    9295 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0419 12:40:46.273385    9295 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0419 12:40:46.273391    9295 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0419 12:40:46.273394    9295 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0419 12:40:46.273412    9295 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0419 12:40:46.276219    9295 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0419 12:40:46.276533    9295 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-860000" does not appear in /Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:40:46.276636    9295 kubeconfig.go:62] /Users/jenkins/minikube-integration/18669-6895/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-860000" cluster setting kubeconfig missing "stopped-upgrade-860000" context setting]
	I0419 12:40:46.276844    9295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/kubeconfig: {Name:mkd215d166854846254d417d030271f915e1c7df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:40:46.277278    9295 kapi.go:59] client config for stopped-upgrade-860000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/client.key", CAFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104737980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 12:40:46.277591    9295 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0419 12:40:46.280328    9295 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-860000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
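
kubeadm.go:634 decides to reconfigure because `diff -u` exits nonzero: the previously saved kubeadm.yaml still has the legacy criSocket and systemd cgroup driver, while the new render uses unix:///var/run/cri-dockerd.sock and cgroupfs. Detecting that drift reduces to diff's exit status, sketched here; the function name is illustrative:

package driftsketch

import (
	"errors"
	"os/exec"
)

// configDrift runs `sudo diff -u oldPath newPath` and returns the unified
// diff when the files differ; an empty result means the rendered config
// matches what is already deployed and no reconfigure is needed.
func configDrift(oldPath, newPath string) (string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	var exit *exec.ExitError
	if errors.As(err, &exit) && exit.ExitCode() == 1 {
		return string(out), nil // exit 1: files differ, out holds the diff
	}
	if err != nil {
		return "", err // exit 2 (e.g. missing file) or exec failure
	}
	return "", nil // identical
}
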
	I0419 12:40:46.280338    9295 kubeadm.go:1154] stopping kube-system containers ...
	I0419 12:40:46.280376    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0419 12:40:46.293886    9295 docker.go:483] Stopping containers: [5ce705ff4dfe 8129fb0f9c59 1a1ee76d9718 986cd162b7e6 b92c1db2efbd 2ba5461e0d60 21d19188b6ac 87f1b14237b7]
	I0419 12:40:46.293953    9295 ssh_runner.go:195] Run: docker stop 5ce705ff4dfe 8129fb0f9c59 1a1ee76d9718 986cd162b7e6 b92c1db2efbd 2ba5461e0d60 21d19188b6ac 87f1b14237b7
	I0419 12:40:46.309914    9295 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0419 12:40:46.315353    9295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 12:40:46.318440    9295 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 12:40:46.318446    9295 kubeadm.go:156] found existing configuration files:
	
	I0419 12:40:46.318464    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/admin.conf
	I0419 12:40:46.321397    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 12:40:46.321420    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 12:40:46.324025    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/kubelet.conf
	I0419 12:40:46.326616    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 12:40:46.326635    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 12:40:46.329550    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/controller-manager.conf
	I0419 12:40:46.332008    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 12:40:46.332030    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 12:40:46.334870    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/scheduler.conf
	I0419 12:40:46.338231    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 12:40:46.338283    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
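
	[editor's note] The four grep/rm pairs above are one cleanup loop: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; a failed grep (missing file or no match, exit status 2 here) leads to removal. A direct transcription as a sketch, endpoint copied from the log:

	    package main

	    // Remove any kubeconfig that does not reference the expected endpoint.
	    import "os/exec"

	    func main() {
	        endpoint := "https://control-plane.minikube.internal:51447"
	        for _, f := range []string{"admin.conf", "kubelet.conf",
	            "controller-manager.conf", "scheduler.conf"} {
	            path := "/etc/kubernetes/" + f
	            if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
	                exec.Command("sudo", "rm", "-f", path).Run()
	            }
	        }
	    }
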
	I0419 12:40:46.341537    9295 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 12:40:46.344369    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 12:40:46.368894    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 12:40:47.203497    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0419 12:40:47.329239    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0419 12:40:47.351967    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
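
	[editor's note] The five commands above replay the standard `kubeadm init phase` sequence (certs, kubeconfig, kubelet-start, control-plane, etcd) against the repaired config, using the pinned v1.24.1 binary. A sketch of the same sequence, assuming fail-fast behavior; binary and config paths are from the log:

	    package main

	    // Run the kubeadm init phases logged above in order, stopping at the
	    // first failure.
	    import (
	        "log"
	        "os/exec"
	    )

	    func main() {
	        phases := [][]string{
	            {"certs", "all"}, {"kubeconfig", "all"}, {"kubelet-start"},
	            {"control-plane", "all"}, {"etcd", "local"},
	        }
	        for _, p := range phases {
	            args := append([]string{"init", "phase"}, p...)
	            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	            cmd := exec.Command("/var/lib/minikube/binaries/v1.24.1/kubeadm", args...)
	            if out, err := cmd.CombinedOutput(); err != nil {
	                log.Fatalf("phase %v failed: %v\n%s", p, err, out)
	            }
	        }
	    }
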
	I0419 12:40:47.376575    9295 api_server.go:52] waiting for apiserver process to appear ...
	I0419 12:40:47.376655    9295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 12:40:47.878833    9295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 12:40:48.378723    9295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 12:40:48.388314    9295 api_server.go:72] duration metric: took 1.011758417s to wait for apiserver process to appear ...
	I0419 12:40:48.388333    9295 api_server.go:88] waiting for apiserver healthz status ...
	I0419 12:40:48.388343    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:53.390362    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:53.390382    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:40:58.390807    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:40:58.390871    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:03.391348    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:03.391398    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:08.391980    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:08.392003    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:13.392687    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:13.392850    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:18.394157    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:18.394202    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:23.395608    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:23.395678    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:28.396026    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:28.396067    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:33.397819    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:33.397877    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:38.400180    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:38.400227    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:43.401897    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:43.401944    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:48.402248    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
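
	[editor's note] Every /healthz probe above times out after roughly five seconds, so the poll never sees a healthy apiserver and falls through to log gathering. A minimal sketch of such a poll, assuming a 5s per-request client timeout (suggested by the log cadence, not necessarily minikube's exact value) and skipping TLS verification for the probe only:

	    package main

	    // Poll the apiserver /healthz endpoint until it returns 200 OK.
	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        for {
	            resp, err := client.Get("https://10.0.2.15:8443/healthz")
	            if err == nil && resp.StatusCode == http.StatusOK {
	                resp.Body.Close()
	                fmt.Println("apiserver healthy")
	                return
	            }
	            fmt.Println("healthz not ready:", err)
	            time.Sleep(500 * time.Millisecond)
	        }
	    }
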
	I0419 12:41:48.402600    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:41:48.442838    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:41:48.442964    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:41:48.462318    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:41:48.462418    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:41:48.476303    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:41:48.476376    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:41:48.488233    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:41:48.488305    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:41:48.499738    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:41:48.499797    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:41:48.510871    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:41:48.510940    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:41:48.521766    9295 logs.go:276] 0 containers: []
	W0419 12:41:48.521776    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:41:48.521831    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:41:48.532899    9295 logs.go:276] 0 containers: []
	W0419 12:41:48.532912    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:41:48.532926    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:41:48.532939    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:41:48.548648    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:41:48.548658    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:41:48.560575    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:41:48.560586    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:41:48.599051    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:41:48.599062    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:41:48.627125    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:41:48.627139    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:41:48.644316    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:41:48.644328    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:41:48.659738    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:41:48.659750    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:41:48.663954    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:41:48.663963    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:41:48.679846    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:41:48.679859    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:41:48.693654    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:41:48.693668    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:41:48.710067    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:41:48.710078    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:41:48.722359    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:41:48.722370    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:41:48.826784    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:41:48.826797    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:41:48.840764    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:41:48.840775    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:41:48.851929    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:41:48.851939    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
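
	[editor's note] The gathering pass above (and each repetition that follows) pairs a `docker ps -a --filter name=k8s_<component>` lookup with a `docker logs --tail 400` per container ID, plus journalctl sweeps for kubelet and Docker. A sketch of the per-component part, with the component list taken from the log:

	    package main

	    // For each control-plane component, find its container IDs by the
	    // k8s_<name> prefix and tail the last 400 log lines of each.
	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        for _, name := range []string{"kube-apiserver", "etcd", "coredns",
	            "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
	            ids, _ := exec.Command("docker", "ps", "-a",
	                "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	            for _, id := range strings.Fields(string(ids)) {
	                out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	                fmt.Printf("== %s [%s] ==\n%s", name, id, out)
	            }
	        }
	    }
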
	I0419 12:41:51.380285    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:41:56.381994    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0419 12:41:56.382415    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:41:56.419869    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:41:56.420053    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:41:56.444517    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:41:56.444608    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:41:56.458258    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:41:56.458333    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:41:56.470132    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:41:56.470201    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:41:56.484974    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:41:56.485044    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:41:56.495382    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:41:56.495453    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:41:56.505801    9295 logs.go:276] 0 containers: []
	W0419 12:41:56.505814    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:41:56.505865    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:41:56.516275    9295 logs.go:276] 0 containers: []
	W0419 12:41:56.516285    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:41:56.516291    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:41:56.516297    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:41:56.552574    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:41:56.552586    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:41:56.566533    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:41:56.566544    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:41:56.603915    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:41:56.603924    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:41:56.621479    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:41:56.621490    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:41:56.635737    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:41:56.635747    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:41:56.661996    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:41:56.662015    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:41:56.687002    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:41:56.687015    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:41:56.700881    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:41:56.700893    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:41:56.714628    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:41:56.714638    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:41:56.731515    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:41:56.731524    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:41:56.743252    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:41:56.743266    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:41:56.747899    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:41:56.747907    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:41:56.759410    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:41:56.759422    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:41:56.774014    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:41:56.774029    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:41:59.290766    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:04.292029    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:04.292272    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:04.320899    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:04.321012    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:04.338222    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:04.338322    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:04.351801    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:04.351878    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:04.363935    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:04.364000    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:04.375615    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:04.375675    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:04.386397    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:04.386462    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:04.396349    9295 logs.go:276] 0 containers: []
	W0419 12:42:04.396362    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:04.396432    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:04.408350    9295 logs.go:276] 0 containers: []
	W0419 12:42:04.408361    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:04.408368    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:04.408372    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:42:04.422822    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:04.422835    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:04.465797    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:04.465808    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:42:04.479992    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:04.480004    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:04.491846    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:04.491857    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:04.506914    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:04.506924    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:04.522270    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:04.522281    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:04.533501    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:04.533513    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:04.547801    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:04.547810    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:04.574770    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:04.574782    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:04.612520    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:04.612532    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:04.642138    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:04.642161    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:04.653640    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:04.653652    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:04.671653    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:04.671665    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:04.676166    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:04.676177    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:07.194636    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:12.197169    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:12.197344    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:12.213703    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:12.213784    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:12.229644    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:12.229710    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:12.240357    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:12.240423    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:12.250541    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:12.250600    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:12.260532    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:12.260600    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:12.270468    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:12.270527    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:12.280224    9295 logs.go:276] 0 containers: []
	W0419 12:42:12.280236    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:12.280295    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:12.294788    9295 logs.go:276] 0 containers: []
	W0419 12:42:12.294800    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:12.294808    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:12.294814    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:12.334717    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:12.334731    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:12.360071    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:12.360081    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:12.371517    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:12.371527    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:12.396506    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:12.396514    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:12.407933    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:12.407943    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:12.445127    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:12.445136    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:42:12.459027    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:12.459037    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:12.472902    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:12.472912    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:12.490504    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:12.490515    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:12.504887    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:12.504898    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:12.508981    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:12.508987    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:12.526994    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:12.527004    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:12.540162    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:12.540174    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:12.555087    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:12.555101    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:42:15.072167    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:20.073944    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:20.074079    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:20.088701    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:20.088772    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:20.101155    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:20.101227    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:20.117595    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:20.117661    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:20.127961    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:20.128027    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:20.138211    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:20.138268    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:20.148581    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:20.148644    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:20.159451    9295 logs.go:276] 0 containers: []
	W0419 12:42:20.159463    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:20.159517    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:20.174062    9295 logs.go:276] 0 containers: []
	W0419 12:42:20.174073    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:20.174081    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:20.174088    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:20.191170    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:20.191183    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:20.228812    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:20.228822    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:20.244322    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:20.244335    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:20.248640    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:20.248649    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:20.275401    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:20.275414    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:20.290454    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:20.290465    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:20.326916    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:20.326928    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:20.339042    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:20.339056    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:20.352667    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:20.352679    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:20.369599    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:20.369609    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:42:20.383978    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:20.383989    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:20.409594    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:20.409603    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:20.421040    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:20.421051    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:20.438371    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:20.438386    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:42:22.954382    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:27.956628    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:27.956718    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:27.969356    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:27.969425    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:27.983989    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:27.984061    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:27.994418    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:27.994485    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:28.004966    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:28.005045    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:28.018251    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:28.018313    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:28.029209    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:28.029277    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:28.038915    9295 logs.go:276] 0 containers: []
	W0419 12:42:28.038928    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:28.038982    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:28.049494    9295 logs.go:276] 0 containers: []
	W0419 12:42:28.049505    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:28.049512    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:28.049519    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:28.069822    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:28.069832    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:28.084253    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:28.084266    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:28.088374    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:28.088380    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:28.124599    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:28.124609    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:28.149584    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:28.149597    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:42:28.163719    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:28.163729    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:28.175689    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:28.175700    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:28.187270    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:28.187282    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:28.201584    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:28.201595    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:28.216095    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:28.216104    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:28.232006    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:28.232021    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:28.243874    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:28.243889    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:28.267182    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:28.267191    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:28.303906    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:28.303917    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:42:30.820558    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:35.823139    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:35.823375    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:35.846600    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:35.846690    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:35.860942    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:35.861013    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:35.873344    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:35.873406    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:35.884376    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:35.884446    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:35.895319    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:35.895386    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:35.906038    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:35.906108    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:35.916560    9295 logs.go:276] 0 containers: []
	W0419 12:42:35.916572    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:35.916630    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:35.926668    9295 logs.go:276] 0 containers: []
	W0419 12:42:35.926679    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:35.926688    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:35.926693    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:35.951863    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:35.951876    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:42:35.967338    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:35.967349    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:35.982702    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:35.982713    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:42:35.997077    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:35.997087    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:36.015997    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:36.016008    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:36.028819    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:36.028831    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:36.032923    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:36.032932    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:36.067246    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:36.067256    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:36.081354    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:36.081365    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:36.100888    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:36.100901    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:36.118943    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:36.118953    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:36.136375    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:36.136385    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:36.173750    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:36.173758    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:36.185165    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:36.185177    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:38.710831    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:43.713389    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:43.713619    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:43.733774    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:43.733860    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:43.748293    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:43.748362    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:43.762442    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:43.762511    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:43.772964    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:43.773032    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:43.783621    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:43.783683    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:43.794882    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:43.794947    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:43.804779    9295 logs.go:276] 0 containers: []
	W0419 12:42:43.804789    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:43.804840    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:43.814689    9295 logs.go:276] 0 containers: []
	W0419 12:42:43.814703    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:43.814710    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:43.814718    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:43.838964    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:43.838974    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:42:43.857667    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:43.857678    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:43.876029    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:43.876040    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:43.880337    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:43.880344    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:43.918990    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:43.919001    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:43.933727    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:43.933739    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:43.949176    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:43.949186    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:43.960916    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:43.960928    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:44.000443    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:44.000454    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:44.025163    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:44.025173    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:44.036395    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:44.036409    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:44.047489    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:44.047500    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:44.065256    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:44.065267    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:44.079759    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:44.079770    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:42:46.599140    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:51.601727    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:51.601862    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:51.613277    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:51.613357    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:51.623351    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:51.623409    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:51.633766    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:51.633832    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:51.649574    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:51.649646    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:51.662197    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:51.662263    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:51.672710    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:51.672787    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:51.682982    9295 logs.go:276] 0 containers: []
	W0419 12:42:51.682993    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:51.683049    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:51.697559    9295 logs.go:276] 0 containers: []
	W0419 12:42:51.697569    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:51.697577    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:51.697583    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:51.709310    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:51.709320    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:51.746110    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:51.746120    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:51.771392    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:51.771403    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:51.783577    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:51.783589    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:51.808490    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:51.808501    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:42:51.812532    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:51.812543    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:51.827773    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:51.827785    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:42:51.842702    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:51.842711    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:51.857385    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:51.857398    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:51.875303    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:51.875315    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:51.909156    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:51.909167    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:51.946545    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:51.946556    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:51.963899    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:51.963914    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:51.980703    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:51.980719    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:42:54.496765    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:42:59.499041    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:42:59.499229    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:42:59.515997    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:42:59.516084    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:42:59.529851    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:42:59.529921    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:42:59.541313    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:42:59.541377    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:42:59.551364    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:42:59.551434    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:42:59.562170    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:42:59.562242    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:42:59.572940    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:42:59.573003    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:42:59.583420    9295 logs.go:276] 0 containers: []
	W0419 12:42:59.583432    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:42:59.583488    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:42:59.593181    9295 logs.go:276] 0 containers: []
	W0419 12:42:59.593193    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:42:59.593202    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:42:59.593208    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:42:59.612673    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:42:59.612687    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:42:59.626176    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:42:59.626189    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:42:59.641025    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:42:59.641036    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:42:59.658786    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:42:59.658800    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:42:59.683709    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:42:59.683718    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:42:59.695513    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:42:59.695523    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:42:59.730442    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:42:59.730453    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:42:59.741659    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:42:59.741668    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:42:59.752741    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:42:59.752753    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:42:59.773313    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:42:59.773325    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:42:59.812478    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:42:59.812488    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:42:59.837816    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:42:59.837826    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:42:59.852883    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:42:59.852892    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:42:59.873597    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:42:59.873610    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
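
The span above is one complete iteration of the wait loop this test is stuck in: api_server.go probes https://10.0.2.15:8443/healthz, the probe dies after roughly five seconds with a client timeout, and logs.go then sweeps every control-plane container before the next attempt. Below is a minimal Go sketch of that poll-then-diagnose shape; checkHealthz, the two-second pause, and the six-minute budget are assumptions for illustration, not minikube's real internals.

    // Sketch only: reproduces the probe/timeout pattern visible in the log.
    // checkHealthz and the 6-minute budget are assumptions, not minikube API.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func checkHealthz(url string) error {
    	client := &http.Client{
    		// ~5s matches the gap between each "Checking" and "stopped" line;
    		// on expiry Go reports "context deadline exceeded (Client.Timeout
    		// exceeded while awaiting headers)", exactly as logged above.
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// the apiserver inside the VM serves a self-signed cert
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	url := "https://10.0.2.15:8443/healthz"
    	deadline := time.Now().Add(6 * time.Minute) // assumed overall budget
    	for time.Now().Before(deadline) {
    		if err := checkHealthz(url); err != nil {
    			fmt.Printf("stopped: %s: %v\n", url, err)
    			// here the real tool enumerates containers and gathers
    			// their logs, as in the lines above
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		fmt.Println("apiserver is healthy")
    		return
    	}
    	fmt.Println("gave up waiting for the apiserver")
    }
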
	I0419 12:43:02.379783    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:07.382085    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:07.382256    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:07.405011    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:43:07.405094    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:07.421144    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:43:07.421221    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:07.432289    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:43:07.432360    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:07.443027    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:43:07.443097    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:07.453337    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:43:07.453402    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:07.465165    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:43:07.465228    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:07.475146    9295 logs.go:276] 0 containers: []
	W0419 12:43:07.475157    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:07.475212    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:07.485531    9295 logs.go:276] 0 containers: []
	W0419 12:43:07.485543    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:43:07.485552    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:07.485559    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:07.520394    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:43:07.520407    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:43:07.534324    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:43:07.534338    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:43:07.549064    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:43:07.549078    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:43:07.570265    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:43:07.570274    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:43:07.587894    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:07.587904    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:07.610800    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:43:07.610808    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:43:07.636833    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:43:07.636846    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:43:07.656416    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:43:07.656430    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:43:07.668244    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:07.668254    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:07.705377    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:07.705388    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:07.709195    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:43:07.709205    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:43:07.722898    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:43:07.722908    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:43:07.734136    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:43:07.734148    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:43:07.748732    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:43:07.748742    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
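
Each diagnostic sweep starts by resolving component names to container IDs. cri-dockerd names pod containers following the dockershim convention k8s_<container>_<pod>_<namespace>_..., so filtering docker ps -a on the k8s_<component> prefix finds them; and because -a includes exited containers, a component that has restarted reports two IDs (kube-apiserver shows f756aa5e6017 and 986cd162b7e6 throughout this section). A sketch of that lookup, assuming a local docker CLI; findContainers is an illustrative helper, not the actual logs.go code.

    // Sketch only: the name-filter lookup behind each "docker ps -a
    // --filter=name=k8s_..." line above. Assumes docker is on PATH.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func findContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	// one ID per line; -a includes exited containers, so a component
    	// that has been restarted reports two IDs, as in the log above
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
    		ids, err := findContainers(c)
    		if err != nil {
    			fmt.Println(c, ":", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    }
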
	I0419 12:43:10.263289    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:15.264582    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:15.264897    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:15.300823    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:43:15.300955    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:15.325150    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:43:15.325243    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:15.339295    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:43:15.339360    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:15.354068    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:43:15.354145    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:15.365083    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:43:15.365149    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:15.376036    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:43:15.376099    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:15.387112    9295 logs.go:276] 0 containers: []
	W0419 12:43:15.387125    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:15.387179    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:15.397354    9295 logs.go:276] 0 containers: []
	W0419 12:43:15.397367    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:43:15.397375    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:15.397382    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:15.434873    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:43:15.434884    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:43:15.449039    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:43:15.449053    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:43:15.460689    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:43:15.460701    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:43:15.482419    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:43:15.482430    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:15.494996    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:15.495008    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:15.499717    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:43:15.499725    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:43:15.524710    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:43:15.524722    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:43:15.539695    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:15.539709    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:15.579009    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:43:15.579019    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:43:15.594059    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:43:15.594069    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:43:15.607659    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:15.607669    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:15.630819    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:43:15.630827    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:43:15.648080    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:43:15.648091    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:43:15.663320    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:43:15.663330    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
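
The recurring "container status" command rewards a closer look: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The backtick substitution expands to the crictl path when crictl is installed, and to the bare word crictl (which then fails to execute) when it is not, so the outer || reliably falls back to plain docker ps -a; one command works on both CRI-tooled and Docker-only nodes. A sketch that runs the same string locally, skipping the SSH hop the real runner uses:

    // Sketch only: the command string is copied from the log; running it
    // locally (without the SSH hop) is an illustration, not the real flow.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("both crictl and docker failed:", err)
    	}
    	fmt.Print(string(out))
    }
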
	I0419 12:43:18.179688    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:23.182032    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:23.182237    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:23.208916    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:43:23.209049    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:23.226067    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:43:23.226157    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:23.239315    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:43:23.239393    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:23.251115    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:43:23.251190    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:23.261271    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:43:23.261335    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:23.271791    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:43:23.271860    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:23.281897    9295 logs.go:276] 0 containers: []
	W0419 12:43:23.281907    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:23.281965    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:23.292014    9295 logs.go:276] 0 containers: []
	W0419 12:43:23.292026    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:43:23.292033    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:43:23.292038    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:43:23.305927    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:43:23.305941    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:43:23.322042    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:43:23.322054    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:43:23.336024    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:43:23.336034    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:23.347547    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:43:23.347559    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:43:23.363517    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:43:23.363528    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:43:23.381411    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:23.381420    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:23.404527    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:23.404546    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:23.408805    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:43:23.408814    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:43:23.432307    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:43:23.432318    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:43:23.458657    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:43:23.458667    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:43:23.476783    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:23.476797    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:23.516626    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:23.516636    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:23.553667    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:43:23.553683    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:43:23.565551    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:43:23.565563    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:43:26.082937    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:31.083982    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:31.084206    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:31.103842    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:43:31.103934    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:31.118852    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:43:31.118933    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:31.130840    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:43:31.130906    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:31.141646    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:43:31.141720    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:31.151703    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:43:31.151760    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:31.162399    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:43:31.162464    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:31.172769    9295 logs.go:276] 0 containers: []
	W0419 12:43:31.172782    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:31.172835    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:31.183013    9295 logs.go:276] 0 containers: []
	W0419 12:43:31.183023    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:43:31.183030    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:43:31.183036    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:43:31.196955    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:43:31.196966    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:43:31.222283    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:31.222293    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:31.245967    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:31.245976    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:31.250447    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:43:31.250456    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:43:31.269841    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:43:31.269854    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:43:31.284779    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:43:31.284791    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:31.296443    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:31.296457    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:31.330411    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:43:31.330425    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:43:31.355148    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:43:31.355161    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:43:31.372006    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:43:31.372022    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:43:31.387090    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:43:31.387101    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:43:31.399269    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:43:31.399283    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:43:31.413977    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:31.413987    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:31.453402    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:43:31.453411    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
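
Besides the per-container docker logs --tail 400 calls, each sweep pulls several host-level sources: the kubelet and docker/cri-docker units via journalctl -n 400; kernel messages via dmesg, where -H is human-readable output, -P disables the pager, -L=never disables color, and --level warn,err,crit,alert,emerg keeps only warnings and worse; and a node description produced by the VM's own version-matched kubectl at /var/lib/minikube/binaries/v1.24.1/kubectl against the node-local kubeconfig, which works even when the host's kubectl context does not. A sketch of that fan-out as a plain name-to-command table; the map literal is an illustration, not minikube's data structure.

    // Sketch only: a name -> shell-command table mirroring the "Gathering
    // logs for ..." lines above. The container ID is one from this log.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	sources := map[string]string{
    		"kubelet":             "sudo journalctl -u kubelet -n 400",
    		"Docker":              "sudo journalctl -u docker -u cri-docker -n 400",
    		"dmesg":               "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"describe nodes":      "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    		"etcd [c80736489828]": "docker logs --tail 400 c80736489828",
    	}
    	for name, cmd := range sources {
    		fmt.Printf("Gathering logs for %s ...\n", name)
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Println("  failed:", err)
    			continue
    		}
    		fmt.Printf("  %d bytes collected\n", len(out))
    	}
    }
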
	I0419 12:43:33.969469    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:38.971759    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:38.972180    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:39.012167    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:43:39.012309    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:39.033327    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:43:39.033442    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:39.056709    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:43:39.056786    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:39.068270    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:43:39.068339    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:39.081923    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:43:39.081990    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:39.092687    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:43:39.092756    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:39.109048    9295 logs.go:276] 0 containers: []
	W0419 12:43:39.109058    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:39.109116    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:39.119884    9295 logs.go:276] 0 containers: []
	W0419 12:43:39.119896    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:43:39.119906    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:43:39.119941    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:43:39.145094    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:43:39.145105    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:43:39.159581    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:39.159591    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:39.198246    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:39.198255    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:39.203227    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:43:39.203239    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:43:39.218027    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:43:39.218036    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:43:39.236656    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:43:39.236668    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:43:39.251286    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:43:39.251295    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:39.264189    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:43:39.264200    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:43:39.278027    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:43:39.278043    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:43:39.289269    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:43:39.289280    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:43:39.304081    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:43:39.304092    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:43:39.319070    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:43:39.319082    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:43:39.331113    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:39.331124    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:39.355552    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:39.355562    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:41.892050    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:46.893848    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:46.894242    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:46.928601    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:43:46.928727    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:46.948080    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:43:46.948178    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:46.962739    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:43:46.962815    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:46.975392    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:43:46.975463    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:46.990143    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:43:46.990217    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:47.002991    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:43:47.003063    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:47.014000    9295 logs.go:276] 0 containers: []
	W0419 12:43:47.014011    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:47.014075    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:47.024280    9295 logs.go:276] 0 containers: []
	W0419 12:43:47.024289    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:43:47.024298    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:47.024304    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:47.061275    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:47.061285    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:47.065428    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:43:47.065434    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:43:47.090085    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:43:47.090094    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:43:47.105081    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:43:47.105093    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:43:47.117084    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:43:47.117096    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:43:47.137460    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:43:47.137472    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:43:47.149445    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:43:47.149456    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:43:47.166427    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:43:47.166438    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:47.182396    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:47.182406    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:47.217716    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:43:47.217729    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:43:47.231409    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:43:47.231422    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:43:47.248937    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:43:47.248946    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:43:47.264055    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:43:47.264064    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:43:47.278601    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:47.278610    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:49.804211    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:43:54.806628    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:43:54.806860    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:43:54.836164    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:43:54.836282    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:43:54.856237    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:43:54.856309    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:43:54.868574    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:43:54.868641    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:43:54.879858    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:43:54.879930    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:43:54.890557    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:43:54.890623    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:43:54.901086    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:43:54.901151    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:43:54.912620    9295 logs.go:276] 0 containers: []
	W0419 12:43:54.912634    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:43:54.912696    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:43:54.923156    9295 logs.go:276] 0 containers: []
	W0419 12:43:54.923168    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:43:54.923177    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:43:54.923182    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:43:54.937420    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:43:54.937434    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:43:54.952048    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:43:54.952061    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:43:54.969180    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:43:54.969191    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:43:54.993640    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:43:54.993648    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:43:54.998089    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:43:54.998096    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:43:55.023107    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:43:55.023119    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:43:55.038163    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:43:55.038177    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:43:55.053345    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:43:55.053358    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:43:55.064936    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:43:55.064951    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:43:55.078743    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:43:55.078753    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:43:55.089833    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:43:55.089845    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:43:55.116465    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:43:55.116475    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:43:55.127990    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:43:55.128002    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:43:55.166770    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:43:55.166780    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:43:57.701706    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:02.703958    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:02.704304    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:02.734469    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:44:02.734574    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:02.751827    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:44:02.751912    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:02.765648    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:44:02.765709    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:02.777425    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:44:02.777497    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:02.788891    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:44:02.788962    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:02.799124    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:44:02.799201    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:02.809504    9295 logs.go:276] 0 containers: []
	W0419 12:44:02.809517    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:02.809573    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:02.820129    9295 logs.go:276] 0 containers: []
	W0419 12:44:02.820140    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:44:02.820150    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:02.820156    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:02.855536    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:44:02.855549    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:44:02.872581    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:44:02.872591    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:44:02.901194    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:44:02.901203    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:44:02.913178    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:44:02.913189    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:44:02.924542    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:02.924551    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:02.928735    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:44:02.928745    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:44:02.943651    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:44:02.943667    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:44:02.958298    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:44:02.958309    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:44:02.973220    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:02.973230    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:03.012920    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:44:03.012930    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:44:03.030123    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:44:03.030133    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:03.041514    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:44:03.041525    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:44:03.055190    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:44:03.055200    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:44:03.070359    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:03.070369    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:05.595377    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:10.597516    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:10.597767    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:10.623106    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:44:10.623210    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:10.639576    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:44:10.639659    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:10.664430    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:44:10.664498    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:10.681442    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:44:10.681522    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:10.694777    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:44:10.694840    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:10.705653    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:44:10.705716    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:10.715439    9295 logs.go:276] 0 containers: []
	W0419 12:44:10.715451    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:10.715509    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:10.725527    9295 logs.go:276] 0 containers: []
	W0419 12:44:10.725537    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:44:10.725545    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:10.725551    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:10.730144    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:44:10.730150    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:44:10.744726    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:10.744735    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:10.781708    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:44:10.781734    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:44:10.796697    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:10.796713    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:10.818663    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:44:10.818674    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:44:10.829782    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:44:10.829794    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:44:10.846992    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:44:10.847002    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:44:10.865065    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:10.865076    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:10.902830    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:44:10.902841    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:44:10.919869    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:44:10.919880    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:44:10.944959    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:44:10.944971    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:44:10.959743    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:44:10.959756    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:44:10.974446    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:44:10.974458    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:44:10.992370    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:44:10.992379    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:13.506512    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:18.508673    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:18.508982    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:18.546129    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:44:18.546261    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:18.564345    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:44:18.564436    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:18.577674    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:44:18.577746    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:18.589837    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:44:18.589907    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:18.600759    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:44:18.600830    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:18.612468    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:44:18.612529    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:18.623171    9295 logs.go:276] 0 containers: []
	W0419 12:44:18.623183    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:18.623240    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:18.633218    9295 logs.go:276] 0 containers: []
	W0419 12:44:18.633228    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:44:18.633236    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:44:18.633240    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:44:18.648127    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:44:18.648138    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:44:18.660824    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:44:18.660835    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:44:18.675243    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:44:18.675253    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:44:18.689485    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:18.689495    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:18.727321    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:44:18.727338    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:44:18.740946    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:44:18.740957    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:44:18.756148    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:44:18.756157    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:44:18.780246    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:44:18.780261    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:44:18.799326    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:18.799335    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:18.821381    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:18.821388    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:18.825288    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:44:18.825297    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:18.837296    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:44:18.837308    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:44:18.861773    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:44:18.861786    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:44:18.876969    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:18.876984    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:21.415513    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:26.417613    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:26.417811    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:26.433227    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:44:26.433305    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:26.444212    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:44:26.444287    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:26.454358    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:44:26.454427    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:26.464767    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:44:26.464837    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:26.475300    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:44:26.475364    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:26.490513    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:44:26.490587    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:26.501210    9295 logs.go:276] 0 containers: []
	W0419 12:44:26.501221    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:26.501275    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:26.511287    9295 logs.go:276] 0 containers: []
	W0419 12:44:26.511297    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:44:26.511303    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:44:26.511308    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:44:26.525319    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:26.525335    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:26.549006    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:44:26.549014    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:26.560645    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:26.560659    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:26.599447    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:44:26.599457    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:44:26.616325    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:44:26.616337    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:44:26.631472    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:44:26.631481    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:44:26.642881    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:44:26.642891    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:44:26.669732    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:44:26.669743    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:44:26.681423    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:44:26.681434    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:44:26.698440    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:26.698450    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:26.703003    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:26.703011    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:26.740346    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:44:26.740357    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:44:26.755641    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:44:26.755651    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:44:26.770472    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:44:26.770482    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
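[editor's note] The five-second gap between each "Checking apiserver healthz" line and its matching "stopped" line above comes from a hard client timeout on the probe. A minimal, hypothetical Go sketch of that polling pattern follows — this is not minikube's actual api_server.go, and it skips certificate verification purely for brevity:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Hard 5s deadline per probe, mirroring the ~5s log gaps above.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Simplification for this sketch only: the apiserver cert
			// is self-signed, so verification is skipped here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// e.g. "Client.Timeout exceeded while awaiting headers"
			fmt.Println("stopped:", err)
			time.Sleep(2 * time.Second) // back off before the next probe
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
}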
	I0419 12:44:29.287172    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:34.289297    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:34.289504    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:34.314458    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:44:34.314560    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:34.330863    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:44:34.330947    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:34.343968    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:44:34.344034    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:34.355803    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:44:34.355869    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:34.366731    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:44:34.366794    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:34.384775    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:44:34.384838    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:34.399411    9295 logs.go:276] 0 containers: []
	W0419 12:44:34.399425    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:34.399483    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:34.409781    9295 logs.go:276] 0 containers: []
	W0419 12:44:34.409793    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:44:34.409801    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:44:34.409806    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:44:34.425306    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:44:34.425317    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:44:34.439759    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:44:34.439770    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:44:34.450953    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:44:34.450965    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:44:34.468238    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:34.468247    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:34.490988    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:44:34.490999    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:44:34.504128    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:34.504139    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:34.508251    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:44:34.508257    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:44:34.532139    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:34.532154    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:34.568897    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:34.568905    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:34.602695    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:44:34.602705    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:44:34.620439    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:44:34.620453    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:44:34.632323    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:44:34.632338    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:44:34.648185    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:44:34.648195    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:44:34.663469    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:44:34.663481    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:44:37.180117    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:42.182264    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:42.182501    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:44:42.205152    9295 logs.go:276] 2 containers: [f756aa5e6017 986cd162b7e6]
	I0419 12:44:42.205265    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:44:42.221685    9295 logs.go:276] 2 containers: [c80736489828 1a1ee76d9718]
	I0419 12:44:42.221758    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:44:42.234015    9295 logs.go:276] 1 containers: [1d95317838c2]
	I0419 12:44:42.234083    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:44:42.244815    9295 logs.go:276] 2 containers: [d74cc517559d 5ce705ff4dfe]
	I0419 12:44:42.244882    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:44:42.254879    9295 logs.go:276] 1 containers: [b06430c1c43a]
	I0419 12:44:42.254942    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:44:42.265967    9295 logs.go:276] 2 containers: [80f224685a58 8129fb0f9c59]
	I0419 12:44:42.266030    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:44:42.284611    9295 logs.go:276] 0 containers: []
	W0419 12:44:42.284622    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:44:42.284677    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:44:42.295112    9295 logs.go:276] 0 containers: []
	W0419 12:44:42.295128    9295 logs.go:278] No container was found matching "storage-provisioner"
	I0419 12:44:42.295136    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:44:42.295142    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:44:42.334056    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:44:42.334065    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:44:42.338225    9295 logs.go:123] Gathering logs for kube-apiserver [f756aa5e6017] ...
	I0419 12:44:42.338232    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f756aa5e6017"
	I0419 12:44:42.366268    9295 logs.go:123] Gathering logs for kube-apiserver [986cd162b7e6] ...
	I0419 12:44:42.366278    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 986cd162b7e6"
	I0419 12:44:42.399725    9295 logs.go:123] Gathering logs for kube-scheduler [5ce705ff4dfe] ...
	I0419 12:44:42.399735    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce705ff4dfe"
	I0419 12:44:42.414495    9295 logs.go:123] Gathering logs for kube-proxy [b06430c1c43a] ...
	I0419 12:44:42.414505    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b06430c1c43a"
	I0419 12:44:42.431781    9295 logs.go:123] Gathering logs for kube-controller-manager [8129fb0f9c59] ...
	I0419 12:44:42.431790    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8129fb0f9c59"
	I0419 12:44:42.446132    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:44:42.446142    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:44:42.480636    9295 logs.go:123] Gathering logs for coredns [1d95317838c2] ...
	I0419 12:44:42.480650    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d95317838c2"
	I0419 12:44:42.492166    9295 logs.go:123] Gathering logs for kube-controller-manager [80f224685a58] ...
	I0419 12:44:42.492179    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80f224685a58"
	I0419 12:44:42.509217    9295 logs.go:123] Gathering logs for etcd [c80736489828] ...
	I0419 12:44:42.509226    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c80736489828"
	I0419 12:44:42.523034    9295 logs.go:123] Gathering logs for etcd [1a1ee76d9718] ...
	I0419 12:44:42.523047    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a1ee76d9718"
	I0419 12:44:42.538864    9295 logs.go:123] Gathering logs for kube-scheduler [d74cc517559d] ...
	I0419 12:44:42.538879    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74cc517559d"
	I0419 12:44:42.554004    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:44:42.554019    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:44:42.577872    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:44:42.577880    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
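[editor's note] Each log-gathering pass above follows the same two-step shape: enumerate candidate containers with "docker ps -a --filter name=k8s_<component> --format {{.ID}}", then fetch "docker logs --tail 400 <id>" for every hit. An illustrative stand-alone sketch under those assumptions — the helper name is hypothetical, not minikube's logs.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists (possibly exited) containers for one kube component,
// using the same `docker ps -a` filter seen in the log above.
func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids := containerIDs(c)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Same tail depth as the log's "docker logs --tail 400 <id>".
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] (%d bytes of logs)\n", c, id, len(logs))
		}
	}
}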
	I0419 12:44:45.091469    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:50.092411    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:44:50.092482    9295 kubeadm.go:591] duration metric: took 4m3.824481792s to restartPrimaryControlPlane
	W0419 12:44:50.092555    9295 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0419 12:44:50.092579    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0419 12:44:51.059240    9295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0419 12:44:51.064313    9295 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0419 12:44:51.067085    9295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0419 12:44:51.069861    9295 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0419 12:44:51.069868    9295 kubeadm.go:156] found existing configuration files:
	
	I0419 12:44:51.069887    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/admin.conf
	I0419 12:44:51.072475    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0419 12:44:51.072503    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0419 12:44:51.074814    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/kubelet.conf
	I0419 12:44:51.077856    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0419 12:44:51.077880    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0419 12:44:51.081095    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/controller-manager.conf
	I0419 12:44:51.083494    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0419 12:44:51.083516    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0419 12:44:51.086246    9295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/scheduler.conf
	I0419 12:44:51.089336    9295 kubeadm.go:162] "https://control-plane.minikube.internal:51447" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51447 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0419 12:44:51.089358    9295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
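[editor's note] The grep-then-rm sequence above is a small stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise deleted so the kubeadm init that follows can regenerate it. A hypothetical Go sketch of the same loop:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:51447"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits 1 on no match and 2 when the file itself is
		// missing — the "Process exited with status 2" case above.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}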
	I0419 12:44:51.091896    9295 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0419 12:44:51.109036    9295 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0419 12:44:51.109094    9295 kubeadm.go:309] [preflight] Running pre-flight checks
	I0419 12:44:51.158840    9295 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0419 12:44:51.158909    9295 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0419 12:44:51.158962    9295 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0419 12:44:51.210975    9295 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0419 12:44:51.214258    9295 out.go:204]   - Generating certificates and keys ...
	I0419 12:44:51.214292    9295 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0419 12:44:51.214339    9295 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0419 12:44:51.214436    9295 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0419 12:44:51.214500    9295 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0419 12:44:51.214535    9295 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0419 12:44:51.214567    9295 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0419 12:44:51.214595    9295 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0419 12:44:51.214680    9295 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0419 12:44:51.214752    9295 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0419 12:44:51.214845    9295 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0419 12:44:51.214884    9295 kubeadm.go:309] [certs] Using the existing "sa" key
	I0419 12:44:51.214915    9295 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0419 12:44:51.356534    9295 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0419 12:44:51.416182    9295 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0419 12:44:51.452681    9295 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0419 12:44:51.553146    9295 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0419 12:44:51.582227    9295 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0419 12:44:51.583372    9295 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0419 12:44:51.583395    9295 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0419 12:44:51.656274    9295 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0419 12:44:51.661698    9295 out.go:204]   - Booting up control plane ...
	I0419 12:44:51.661753    9295 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0419 12:44:51.661790    9295 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0419 12:44:51.661820    9295 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0419 12:44:51.661858    9295 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0419 12:44:51.661953    9295 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0419 12:44:56.664279    9295 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.007730 seconds
	I0419 12:44:56.664453    9295 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0419 12:44:56.677965    9295 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0419 12:44:57.190534    9295 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0419 12:44:57.190663    9295 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-860000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0419 12:44:57.704246    9295 kubeadm.go:309] [bootstrap-token] Using token: pmip4s.5q42x0gk1u9qbqk8
	I0419 12:44:57.708466    9295 out.go:204]   - Configuring RBAC rules ...
	I0419 12:44:57.708595    9295 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0419 12:44:57.709267    9295 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0419 12:44:57.715852    9295 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0419 12:44:57.718027    9295 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0419 12:44:57.719926    9295 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0419 12:44:57.721647    9295 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0419 12:44:57.728492    9295 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0419 12:44:57.867805    9295 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0419 12:44:58.111755    9295 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0419 12:44:58.112342    9295 kubeadm.go:309] 
	I0419 12:44:58.112371    9295 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0419 12:44:58.112374    9295 kubeadm.go:309] 
	I0419 12:44:58.112422    9295 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0419 12:44:58.112429    9295 kubeadm.go:309] 
	I0419 12:44:58.112444    9295 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0419 12:44:58.112477    9295 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0419 12:44:58.112502    9295 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0419 12:44:58.112512    9295 kubeadm.go:309] 
	I0419 12:44:58.112541    9295 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0419 12:44:58.112545    9295 kubeadm.go:309] 
	I0419 12:44:58.112579    9295 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0419 12:44:58.112586    9295 kubeadm.go:309] 
	I0419 12:44:58.112614    9295 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0419 12:44:58.112654    9295 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0419 12:44:58.112692    9295 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0419 12:44:58.112697    9295 kubeadm.go:309] 
	I0419 12:44:58.112738    9295 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0419 12:44:58.112778    9295 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0419 12:44:58.112783    9295 kubeadm.go:309] 
	I0419 12:44:58.112821    9295 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token pmip4s.5q42x0gk1u9qbqk8 \
	I0419 12:44:58.112879    9295 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:43bc0efc3f284da6029f4e6dabe908f0c23cb1fa613a356d9709456ef7f07973 \
	I0419 12:44:58.112892    9295 kubeadm.go:309] 	--control-plane 
	I0419 12:44:58.112897    9295 kubeadm.go:309] 
	I0419 12:44:58.112936    9295 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0419 12:44:58.112943    9295 kubeadm.go:309] 
	I0419 12:44:58.112992    9295 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token pmip4s.5q42x0gk1u9qbqk8 \
	I0419 12:44:58.113069    9295 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:43bc0efc3f284da6029f4e6dabe908f0c23cb1fa613a356d9709456ef7f07973 
	I0419 12:44:58.113398    9295 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0419 12:44:58.113414    9295 cni.go:84] Creating CNI manager for ""
	I0419 12:44:58.113425    9295 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:44:58.117130    9295 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0419 12:44:58.124424    9295 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0419 12:44:58.127404    9295 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
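[editor's note] The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. A generic bridge CNI conflist of the kind this step writes — values here are illustrative, not the exact file minikube generated — looks like:

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}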
	I0419 12:44:58.132072    9295 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0419 12:44:58.132116    9295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0419 12:44:58.132160    9295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-860000 minikube.k8s.io/updated_at=2024_04_19T12_44_58_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=08394f7216638b47bd5fa7f34baee3de2320d73b minikube.k8s.io/name=stopped-upgrade-860000 minikube.k8s.io/primary=true
	I0419 12:44:58.135255    9295 ops.go:34] apiserver oom_adj: -16
	I0419 12:44:58.175214    9295 kubeadm.go:1107] duration metric: took 43.133042ms to wait for elevateKubeSystemPrivileges
	W0419 12:44:58.175232    9295 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0419 12:44:58.175235    9295 kubeadm.go:393] duration metric: took 4m11.920693666s to StartCluster
	I0419 12:44:58.175244    9295 settings.go:142] acquiring lock: {Name:mkc28392d1c267200804e17c319a937f73acc262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:44:58.175325    9295 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:44:58.175728    9295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/kubeconfig: {Name:mkd215d166854846254d417d030271f915e1c7df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:44:58.175924    9295 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:44:58.180299    9295 out.go:177] * Verifying Kubernetes components...
	I0419 12:44:58.175939    9295 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0419 12:44:58.176023    9295 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:44:58.188287    9295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0419 12:44:58.188289    9295 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-860000"
	I0419 12:44:58.188303    9295 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-860000"
	I0419 12:44:58.188284    9295 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-860000"
	I0419 12:44:58.188318    9295 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-860000"
	W0419 12:44:58.188328    9295 addons.go:243] addon storage-provisioner should already be in state true
	I0419 12:44:58.188350    9295 host.go:66] Checking if "stopped-upgrade-860000" exists ...
	I0419 12:44:58.188753    9295 retry.go:31] will retry after 960.455837ms: connect: dial unix /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/monitor: connect: connection refused
	I0419 12:44:58.192199    9295 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0419 12:44:58.196318    9295 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 12:44:58.196325    9295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0419 12:44:58.196335    9295 sshutil.go:53] new ssh client: &{IP:localhost Port:51412 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/id_rsa Username:docker}
	I0419 12:44:58.266667    9295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0419 12:44:58.272183    9295 api_server.go:52] waiting for apiserver process to appear ...
	I0419 12:44:58.272221    9295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0419 12:44:58.275756    9295 api_server.go:72] duration metric: took 99.824416ms to wait for apiserver process to appear ...
	I0419 12:44:58.275765    9295 api_server.go:88] waiting for apiserver healthz status ...
	I0419 12:44:58.275773    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:44:58.329092    9295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0419 12:44:59.152853    9295 kapi.go:59] client config for stopped-upgrade-860000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/stopped-upgrade-860000/client.key", CAFile:"/Users/jenkins/minikube-integration/18669-6895/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104737980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0419 12:44:59.153151    9295 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-860000"
	W0419 12:44:59.153160    9295 addons.go:243] addon default-storageclass should already be in state true
	I0419 12:44:59.153178    9295 host.go:66] Checking if "stopped-upgrade-860000" exists ...
	I0419 12:44:59.154253    9295 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0419 12:44:59.154263    9295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0419 12:44:59.154272    9295 sshutil.go:53] new ssh client: &{IP:localhost Port:51412 SSHKeyPath:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/stopped-upgrade-860000/id_rsa Username:docker}
	I0419 12:44:59.191633    9295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0419 12:45:03.277937    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:03.278017    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:08.278543    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:08.278568    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:13.278953    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:13.278979    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:18.279462    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:18.279509    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:23.280260    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:23.280338    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:28.281127    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:28.281177    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0419 12:45:29.243496    9295 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0419 12:45:29.246449    9295 out.go:177] * Enabled addons: storage-provisioner
	I0419 12:45:29.259122    9295 addons.go:505] duration metric: took 31.08388125s for enable addons: enabled=[storage-provisioner]
	I0419 12:45:33.282134    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:33.282178    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:38.282540    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:38.282565    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:43.283984    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:43.284004    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:48.285911    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:48.285955    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:53.287062    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:53.287146    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:45:58.289612    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:45:58.289746    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:45:58.301499    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:45:58.301571    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:45:58.311948    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:45:58.312014    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:45:58.322587    9295 logs.go:276] 2 containers: [c96e132f14af 7b8874da9c2e]
	I0419 12:45:58.322644    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:45:58.333472    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:45:58.333529    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:45:58.344081    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:45:58.344138    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:45:58.354105    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:45:58.354175    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:45:58.364518    9295 logs.go:276] 0 containers: []
	W0419 12:45:58.364527    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:45:58.364586    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:45:58.374657    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:45:58.374672    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:45:58.374676    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:45:58.399638    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:45:58.399646    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:45:58.410518    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:45:58.410530    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:45:58.445052    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:45:58.445060    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:45:58.481614    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:45:58.481626    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:45:58.493157    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:45:58.493170    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:45:58.504840    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:45:58.504853    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:45:58.519948    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:45:58.519960    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:45:58.531729    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:45:58.531743    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:45:58.535805    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:45:58.535813    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:45:58.550586    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:45:58.550599    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:45:58.565383    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:45:58.565394    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:45:58.577564    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:45:58.577576    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:46:01.096537    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:46:06.099322    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:46:06.099757    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:46:06.137002    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:46:06.137129    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:46:06.158598    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:46:06.158684    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:46:06.173598    9295 logs.go:276] 2 containers: [c96e132f14af 7b8874da9c2e]
	I0419 12:46:06.173667    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:46:06.188191    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:46:06.188264    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:46:06.198999    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:46:06.199059    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:46:06.209456    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:46:06.209516    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:46:06.219974    9295 logs.go:276] 0 containers: []
	W0419 12:46:06.219987    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:46:06.220042    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:46:06.230260    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:46:06.230275    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:46:06.230280    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:46:06.244492    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:46:06.244504    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:46:06.255921    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:46:06.255935    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:46:06.270776    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:46:06.270786    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:46:06.295278    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:46:06.295290    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:46:06.318025    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:46:06.318032    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:46:06.328906    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:46:06.328915    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:46:06.340412    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:46:06.340427    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:46:06.376711    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:46:06.376729    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:46:06.381871    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:46:06.381880    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:46:06.421211    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:46:06.421226    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:46:06.436794    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:46:06.436809    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:46:06.450178    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:46:06.450193    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:46:08.966879    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:46:13.967897    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:46:13.968248    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:46:13.998865    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:46:13.998972    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:46:14.017937    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:46:14.018020    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:46:14.032101    9295 logs.go:276] 2 containers: [c96e132f14af 7b8874da9c2e]
	I0419 12:46:14.032180    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:46:14.047711    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:46:14.047789    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:46:14.062811    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:46:14.062870    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:46:14.074421    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:46:14.074483    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:46:14.085182    9295 logs.go:276] 0 containers: []
	W0419 12:46:14.085194    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:46:14.085243    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:46:14.097926    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:46:14.097940    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:46:14.097946    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:46:14.113524    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:46:14.113537    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:46:14.127949    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:46:14.127961    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:46:14.139746    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:46:14.139757    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:46:14.158460    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:46:14.158467    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:46:14.170101    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:46:14.170118    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:46:14.182436    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:46:14.182445    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:46:14.216941    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:46:14.216949    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:46:14.221562    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:46:14.221568    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:46:14.233112    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:46:14.233120    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:46:14.250668    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:46:14.250677    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:46:14.274859    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:46:14.274868    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:46:14.308739    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:46:14.308751    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:46:16.822673    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:46:21.825328    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:46:21.825728    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:46:21.871956    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:46:21.872063    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:46:21.888459    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:46:21.888544    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:46:21.901596    9295 logs.go:276] 2 containers: [c96e132f14af 7b8874da9c2e]
	I0419 12:46:21.901663    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:46:21.915660    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:46:21.915723    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:46:21.925827    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:46:21.925884    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:46:21.936229    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:46:21.936284    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:46:21.951462    9295 logs.go:276] 0 containers: []
	W0419 12:46:21.951474    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:46:21.951533    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:46:21.961959    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:46:21.961972    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:46:21.961978    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:46:21.974528    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:46:21.974544    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:46:21.978805    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:46:21.978814    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:46:21.993313    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:46:21.993326    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:46:22.007193    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:46:22.007206    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:46:22.018357    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:46:22.018365    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:46:22.029726    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:46:22.029737    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:46:22.047267    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:46:22.047276    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:46:22.070108    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:46:22.070115    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:46:22.103616    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:46:22.103623    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:46:22.137109    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:46:22.137122    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:46:22.152605    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:46:22.152617    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:46:22.164214    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:46:22.164227    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:46:24.678910    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:46:29.680755    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:46:29.681069    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:46:29.709254    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:46:29.709379    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:46:29.727262    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:46:29.727350    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:46:29.741210    9295 logs.go:276] 2 containers: [c96e132f14af 7b8874da9c2e]
	I0419 12:46:29.741281    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:46:29.756509    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:46:29.756576    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:46:29.767158    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:46:29.767227    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:46:29.778023    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:46:29.778088    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:46:29.787614    9295 logs.go:276] 0 containers: []
	W0419 12:46:29.787624    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:46:29.787673    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:46:29.798054    9295 logs.go:276] 1 containers: [ee19da475ebb]
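The `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` sweep above is how the collector discovers which container backs each control-plane component: the kubelet's Docker integration names containers with a `k8s_<component>_...` prefix, and the name filter does a substring match, so one probe per component returns zero or more container IDs (here kindnet returns none because the cluster does not use it). A rough local equivalent of one sweep, run directly rather than over SSH (the function and error handling are illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers, running or exited, whose
// name contains "k8s_"+component, mirroring the filter used in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per line; drops blanks
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}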
	I0419 12:46:29.798069    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:46:29.798074    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:46:29.833069    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:46:29.833080    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:46:29.847362    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:46:29.847373    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:46:29.862043    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:46:29.862054    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:46:29.884842    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:46:29.884853    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:46:29.897971    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:46:29.897981    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:46:29.902268    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:46:29.902276    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:46:29.939860    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:46:29.939874    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:46:29.952121    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:46:29.952132    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:46:29.965164    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:46:29.965176    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:46:29.980135    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:46:29.980148    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:46:29.991702    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:46:29.991713    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:46:30.002963    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:46:30.002974    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
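Each `logs.go:123` line then fans out one shell command per log source: `docker logs --tail 400 <id>` for each discovered container, `journalctl -u ...` for the kubelet and Docker units, a filtered `dmesg` for the kernel ring buffer, `kubectl describe nodes` against the in-VM kubeconfig, and a crictl probe for overall container status; the backticked `which crictl || echo crictl` plus the trailing `|| sudo docker ps -a` keeps that last command usable even when crictl is absent from PATH. A condensed local sketch of the fan-out, with the command strings taken from the log (the container ID is one of the IDs above; the runner itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One shell command per log source, as in the log above.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"kube-apiserver":   "docker logs --tail 400 d9a0291ff5bd",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		fmt.Println("Gathering logs for", name, "...")
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println(name, "failed:", err)
		}
		fmt.Print(string(out))
	}
}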
	I0419 12:46:32.528938    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:46:37.529754    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:46:37.530189    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:46:37.568686    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:46:37.568819    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:46:37.590316    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:46:37.590414    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:46:37.606172    9295 logs.go:276] 2 containers: [c96e132f14af 7b8874da9c2e]
	I0419 12:46:37.606249    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:46:37.619107    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:46:37.619179    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:46:37.631102    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:46:37.631169    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:46:37.641263    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:46:37.641321    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:46:37.651210    9295 logs.go:276] 0 containers: []
	W0419 12:46:37.651219    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:46:37.651266    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:46:37.662064    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:46:37.662079    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:46:37.662084    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:46:37.675854    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:46:37.675865    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:46:37.687946    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:46:37.687959    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:46:37.700144    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:46:37.700155    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:46:37.724378    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:46:37.724387    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:46:37.748274    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:46:37.748285    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:46:37.782059    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:46:37.782068    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:46:37.786175    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:46:37.786184    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:46:37.797769    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:46:37.797781    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:46:37.812631    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:46:37.812644    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:46:37.825941    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:46:37.825956    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:46:37.837932    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:46:37.837945    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:46:37.882431    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:46:37.882443    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:46:40.399264    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:46:45.401325    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:46:45.401760    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:46:45.443365    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:46:45.443494    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:46:45.464069    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:46:45.464176    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:46:45.479607    9295 logs.go:276] 2 containers: [c96e132f14af 7b8874da9c2e]
	I0419 12:46:45.479680    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:46:45.491857    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:46:45.491924    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:46:45.502571    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:46:45.502640    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:46:45.513125    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:46:45.513191    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:46:45.523535    9295 logs.go:276] 0 containers: []
	W0419 12:46:45.523547    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:46:45.523605    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:46:45.534260    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:46:45.534274    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:46:45.534281    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:46:45.568001    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:46:45.568012    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:46:45.582163    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:46:45.582176    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:46:45.596096    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:46:45.596109    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:46:45.607466    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:46:45.607478    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:46:45.622379    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:46:45.622392    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:46:45.634431    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:46:45.634442    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:46:45.667941    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:46:45.667947    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:46:45.672050    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:46:45.672056    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:46:45.704092    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:46:45.704098    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:46:45.715316    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:46:45.715330    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:46:45.727153    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:46:45.727166    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:46:45.738983    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:46:45.738992    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:46:48.258474    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:46:53.260849    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:46:53.261162    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:46:53.302177    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:46:53.302297    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:46:53.324605    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:46:53.324715    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:46:53.340171    9295 logs.go:276] 2 containers: [c96e132f14af 7b8874da9c2e]
	I0419 12:46:53.340239    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:46:53.352477    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:46:53.352546    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:46:53.371744    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:46:53.371812    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:46:53.382575    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:46:53.382638    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:46:53.392951    9295 logs.go:276] 0 containers: []
	W0419 12:46:53.392965    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:46:53.393024    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:46:53.403300    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:46:53.403318    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:46:53.403322    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:46:53.426901    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:46:53.426907    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:46:53.461056    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:46:53.461064    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:46:53.465486    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:46:53.465495    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:46:53.477046    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:46:53.477056    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:46:53.489317    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:46:53.489328    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:46:53.503966    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:46:53.503976    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:46:53.521626    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:46:53.521636    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:46:53.532996    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:46:53.533009    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:46:53.567470    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:46:53.567481    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:46:53.582032    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:46:53.582045    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:46:53.595945    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:46:53.595958    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:46:53.607118    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:46:53.607129    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:46:56.121875    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:47:01.124332    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:47:01.124587    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:47:01.151465    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:47:01.151579    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:47:01.172707    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:47:01.172777    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:47:01.185313    9295 logs.go:276] 2 containers: [c96e132f14af 7b8874da9c2e]
	I0419 12:47:01.185403    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:47:01.198158    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:47:01.198227    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:47:01.208710    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:47:01.208768    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:47:01.219341    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:47:01.219405    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:47:01.229634    9295 logs.go:276] 0 containers: []
	W0419 12:47:01.229644    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:47:01.229694    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:47:01.239390    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:47:01.239406    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:47:01.239411    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:47:01.250585    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:47:01.250599    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:47:01.267714    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:47:01.267723    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:47:01.301687    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:47:01.301695    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:47:01.336654    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:47:01.336665    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:47:01.350883    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:47:01.350896    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:47:01.368572    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:47:01.368582    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:47:01.380541    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:47:01.380552    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:47:01.407012    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:47:01.407025    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:47:01.429837    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:47:01.429844    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:47:01.440618    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:47:01.440631    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:47:01.445119    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:47:01.445127    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:47:01.456806    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:47:01.456818    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:47:03.970078    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:47:08.972598    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:47:08.973042    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:47:09.013316    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:47:09.013437    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:47:09.039589    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:47:09.039675    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:47:09.054099    9295 logs.go:276] 2 containers: [c96e132f14af 7b8874da9c2e]
	I0419 12:47:09.054161    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:47:09.067917    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:47:09.067974    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:47:09.077968    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:47:09.078023    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:47:09.088351    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:47:09.088421    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:47:09.098405    9295 logs.go:276] 0 containers: []
	W0419 12:47:09.098415    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:47:09.098471    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:47:09.108678    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:47:09.108692    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:47:09.108697    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:47:09.120401    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:47:09.120412    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:47:09.143236    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:47:09.143244    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:47:09.154233    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:47:09.154243    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:47:09.187702    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:47:09.187710    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:47:09.221726    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:47:09.221741    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:47:09.236012    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:47:09.236025    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:47:09.247135    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:47:09.247147    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:47:09.259632    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:47:09.259644    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:47:09.264037    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:47:09.264045    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:47:09.277853    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:47:09.277865    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:47:09.289165    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:47:09.289177    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:47:09.303651    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:47:09.303662    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:47:11.823153    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:47:16.825345    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:47:16.825793    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:47:16.862690    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:47:16.862818    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:47:16.883407    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:47:16.883497    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:47:16.901953    9295 logs.go:276] 4 containers: [bc17caa7fd02 de125cf822c2 c96e132f14af 7b8874da9c2e]
	I0419 12:47:16.902026    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:47:16.913605    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:47:16.913662    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:47:16.924168    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:47:16.924228    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:47:16.934878    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:47:16.934936    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:47:16.944831    9295 logs.go:276] 0 containers: []
	W0419 12:47:16.944849    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:47:16.944894    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:47:16.955178    9295 logs.go:276] 1 containers: [ee19da475ebb]
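From this cycle on, the coredns sweep returns four container IDs where earlier cycles returned two; the new IDs bc17caa7fd02 and de125cf822c2 alongside the original pair are consistent with the CoreDNS pods having been restarted or recreated while the apiserver remained unreachable.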
	I0419 12:47:16.955193    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:47:16.955198    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:47:16.966378    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:47:16.966386    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:47:16.977971    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:47:16.977985    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:47:17.013080    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:47:17.013086    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:47:17.017420    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:47:17.017428    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:47:17.032054    9295 logs.go:123] Gathering logs for coredns [de125cf822c2] ...
	I0419 12:47:17.032065    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de125cf822c2"
	I0419 12:47:17.043059    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:47:17.043069    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:47:17.054771    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:47:17.054781    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:47:17.065585    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:47:17.065597    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:47:17.079469    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:47:17.079480    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:47:17.090775    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:47:17.090789    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:47:17.127894    9295 logs.go:123] Gathering logs for coredns [bc17caa7fd02] ...
	I0419 12:47:17.127904    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc17caa7fd02"
	I0419 12:47:17.139982    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:47:17.139992    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:47:17.162125    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:47:17.162137    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:47:17.178689    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:47:17.178700    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:47:19.704359    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:47:24.706541    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:47:24.706931    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:47:24.748494    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:47:24.748609    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:47:24.771306    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:47:24.771395    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:47:24.786628    9295 logs.go:276] 4 containers: [bc17caa7fd02 de125cf822c2 c96e132f14af 7b8874da9c2e]
	I0419 12:47:24.786703    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:47:24.799266    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:47:24.799324    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:47:24.810099    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:47:24.810166    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:47:24.822408    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:47:24.822476    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:47:24.835269    9295 logs.go:276] 0 containers: []
	W0419 12:47:24.835280    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:47:24.835328    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:47:24.845721    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:47:24.845742    9295 logs.go:123] Gathering logs for coredns [de125cf822c2] ...
	I0419 12:47:24.845748    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de125cf822c2"
	I0419 12:47:24.858536    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:47:24.858549    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:47:24.870436    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:47:24.870448    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:47:24.882522    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:47:24.882535    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:47:24.899801    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:47:24.899809    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:47:24.904670    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:47:24.904677    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:47:24.938067    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:47:24.938081    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:47:24.952507    9295 logs.go:123] Gathering logs for coredns [bc17caa7fd02] ...
	I0419 12:47:24.952518    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc17caa7fd02"
	I0419 12:47:24.964543    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:47:24.964552    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:47:24.988958    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:47:24.988966    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:47:25.000494    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:47:25.000505    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:47:25.019335    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:47:25.019349    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:47:25.054135    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:47:25.054143    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:47:25.069830    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:47:25.069844    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:47:25.081774    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:47:25.081784    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:47:27.598710    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:47:32.601345    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:47:32.601793    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:47:32.642377    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:47:32.642505    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:47:32.663672    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:47:32.663778    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:47:32.679466    9295 logs.go:276] 4 containers: [bc17caa7fd02 de125cf822c2 c96e132f14af 7b8874da9c2e]
	I0419 12:47:32.679543    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:47:32.695234    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:47:32.695298    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:47:32.706881    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:47:32.706936    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:47:32.717548    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:47:32.717612    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:47:32.727516    9295 logs.go:276] 0 containers: []
	W0419 12:47:32.727531    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:47:32.727582    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:47:32.737770    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:47:32.737787    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:47:32.737792    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:47:32.771178    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:47:32.771186    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:47:32.788704    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:47:32.788714    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:47:32.807170    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:47:32.807182    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:47:32.832281    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:47:32.832291    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:47:32.866695    9295 logs.go:123] Gathering logs for coredns [de125cf822c2] ...
	I0419 12:47:32.866708    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de125cf822c2"
	I0419 12:47:32.878892    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:47:32.878906    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:47:32.890152    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:47:32.890162    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:47:32.901912    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:47:32.901923    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:47:32.905981    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:47:32.905990    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:47:32.919536    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:47:32.919549    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:47:32.930991    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:47:32.931001    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:47:32.945735    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:47:32.945747    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:47:32.962998    9295 logs.go:123] Gathering logs for coredns [bc17caa7fd02] ...
	I0419 12:47:32.963008    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc17caa7fd02"
	I0419 12:47:32.980028    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:47:32.980038    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:47:35.491803    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:47:40.494086    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:47:40.494499    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:47:40.533149    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:47:40.533274    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:47:40.554673    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:47:40.554764    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:47:40.571939    9295 logs.go:276] 4 containers: [bc17caa7fd02 de125cf822c2 c96e132f14af 7b8874da9c2e]
	I0419 12:47:40.572015    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:47:40.585080    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:47:40.585136    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:47:40.595834    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:47:40.595905    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:47:40.606378    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:47:40.606444    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:47:40.616626    9295 logs.go:276] 0 containers: []
	W0419 12:47:40.616639    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:47:40.616694    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:47:40.627252    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:47:40.627270    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:47:40.627275    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:47:40.663224    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:47:40.663238    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:47:40.680819    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:47:40.680829    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:47:40.692445    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:47:40.692455    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:47:40.726052    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:47:40.726061    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:47:40.737153    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:47:40.737164    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:47:40.752480    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:47:40.752489    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:47:40.764134    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:47:40.764146    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:47:40.776139    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:47:40.776149    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:47:40.791922    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:47:40.791935    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:47:40.806665    9295 logs.go:123] Gathering logs for coredns [de125cf822c2] ...
	I0419 12:47:40.806677    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de125cf822c2"
	I0419 12:47:40.820873    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:47:40.820887    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:47:40.841916    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:47:40.841927    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:47:40.846182    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:47:40.846190    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:47:40.869309    9295 logs.go:123] Gathering logs for coredns [bc17caa7fd02] ...
	I0419 12:47:40.869332    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc17caa7fd02"
	I0419 12:47:43.382947    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:47:48.384193    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:47:48.384257    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:47:48.397360    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:47:48.397429    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:47:48.408911    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:47:48.408955    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:47:48.420846    9295 logs.go:276] 4 containers: [bc17caa7fd02 de125cf822c2 c96e132f14af 7b8874da9c2e]
	I0419 12:47:48.420931    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:47:48.432279    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:47:48.432337    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:47:48.443599    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:47:48.443674    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:47:48.454638    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:47:48.454709    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:47:48.465049    9295 logs.go:276] 0 containers: []
	W0419 12:47:48.465058    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:47:48.465107    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:47:48.475801    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:47:48.475817    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:47:48.475824    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:47:48.480245    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:47:48.480255    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:47:48.518210    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:47:48.518223    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:47:48.530763    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:47:48.530774    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:47:48.556062    9295 logs.go:123] Gathering logs for coredns [de125cf822c2] ...
	I0419 12:47:48.556069    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de125cf822c2"
	I0419 12:47:48.567712    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:47:48.567721    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:47:48.585911    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:47:48.585921    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:47:48.599401    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:47:48.599411    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:47:48.611541    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:47:48.611550    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:47:48.623417    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:47:48.623427    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:47:48.660991    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:47:48.661018    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:47:48.677007    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:47:48.677020    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:47:48.693647    9295 logs.go:123] Gathering logs for coredns [bc17caa7fd02] ...
	I0419 12:47:48.693665    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc17caa7fd02"
	I0419 12:47:48.707598    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:47:48.707613    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:47:48.720612    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:47:48.720627    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:47:51.239227    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:47:56.241876    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:47:56.241978    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:47:56.254111    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:47:56.254175    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:47:56.264113    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:47:56.264163    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:47:56.274750    9295 logs.go:276] 4 containers: [bc17caa7fd02 de125cf822c2 c96e132f14af 7b8874da9c2e]
	I0419 12:47:56.274816    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:47:56.285132    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:47:56.285187    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:47:56.294901    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:47:56.294959    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:47:56.305396    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:47:56.305458    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:47:56.315932    9295 logs.go:276] 0 containers: []
	W0419 12:47:56.315947    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:47:56.315998    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:47:56.326520    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:47:56.326542    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:47:56.326547    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:47:56.360209    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:47:56.360223    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:47:56.372087    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:47:56.372099    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:47:56.383719    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:47:56.383732    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:47:56.407163    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:47:56.407174    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:47:56.421554    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:47:56.421567    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:47:56.436791    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:47:56.436800    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:47:56.452369    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:47:56.452378    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:47:56.470772    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:47:56.470781    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:47:56.483127    9295 logs.go:123] Gathering logs for coredns [bc17caa7fd02] ...
	I0419 12:47:56.483138    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc17caa7fd02"
	I0419 12:47:56.495072    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:47:56.495083    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:47:56.506629    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:47:56.506638    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:47:56.539660    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:47:56.539667    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:47:56.544204    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:47:56.544210    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:47:56.557943    9295 logs.go:123] Gathering logs for coredns [de125cf822c2] ...
	I0419 12:47:56.557953    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de125cf822c2"
	I0419 12:47:59.070346    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:48:04.072573    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:48:04.073031    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:48:04.110154    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:48:04.110280    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:48:04.131322    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:48:04.131420    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:48:04.146569    9295 logs.go:276] 4 containers: [bc17caa7fd02 de125cf822c2 c96e132f14af 7b8874da9c2e]
	I0419 12:48:04.146634    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:48:04.159561    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:48:04.159619    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:48:04.170403    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:48:04.170470    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:48:04.181819    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:48:04.181883    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:48:04.192750    9295 logs.go:276] 0 containers: []
	W0419 12:48:04.192762    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:48:04.192805    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:48:04.202954    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:48:04.202971    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:48:04.202976    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:48:04.215169    9295 logs.go:123] Gathering logs for coredns [de125cf822c2] ...
	I0419 12:48:04.215190    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de125cf822c2"
	I0419 12:48:04.227073    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:48:04.227085    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:48:04.238506    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:48:04.238518    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:48:04.253876    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:48:04.253888    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:48:04.258280    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:48:04.258289    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:48:04.271838    9295 logs.go:123] Gathering logs for coredns [bc17caa7fd02] ...
	I0419 12:48:04.271849    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc17caa7fd02"
	I0419 12:48:04.284525    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:48:04.284536    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:48:04.296454    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:48:04.296466    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:48:04.308535    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:48:04.308547    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:48:04.326220    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:48:04.326229    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:48:04.342131    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:48:04.342144    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:48:04.366527    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:48:04.366537    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:48:04.401130    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:48:04.401140    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:48:04.435067    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:48:04.435077    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:48:06.954241    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:48:11.956831    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:48:11.956900    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:48:11.967884    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:48:11.967945    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:48:11.979725    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:48:11.979781    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:48:11.993719    9295 logs.go:276] 4 containers: [bc17caa7fd02 de125cf822c2 c96e132f14af 7b8874da9c2e]
	I0419 12:48:11.993787    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:48:12.005114    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:48:12.005165    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:48:12.015698    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:48:12.015748    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:48:12.027669    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:48:12.027718    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:48:12.039961    9295 logs.go:276] 0 containers: []
	W0419 12:48:12.039972    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:48:12.040013    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:48:12.052053    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:48:12.052067    9295 logs.go:123] Gathering logs for coredns [bc17caa7fd02] ...
	I0419 12:48:12.052072    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc17caa7fd02"
	I0419 12:48:12.064849    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:48:12.064860    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:48:12.078168    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:48:12.078181    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:48:12.094360    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:48:12.094371    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:48:12.112241    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:48:12.112255    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:48:12.125405    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:48:12.125415    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:48:12.160422    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:48:12.160435    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:48:12.166612    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:48:12.166628    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:48:12.208846    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:48:12.208858    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:48:12.227403    9295 logs.go:123] Gathering logs for coredns [de125cf822c2] ...
	I0419 12:48:12.227411    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de125cf822c2"
	I0419 12:48:12.246196    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:48:12.246208    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:48:12.262455    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:48:12.262468    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:48:12.275019    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:48:12.275031    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:48:12.292599    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:48:12.292615    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:48:12.306099    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:48:12.306109    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:48:14.832344    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:48:19.835398    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:48:19.835793    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:48:19.870046    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:48:19.870152    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:48:19.892823    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:48:19.892910    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:48:19.908111    9295 logs.go:276] 4 containers: [bc17caa7fd02 de125cf822c2 c96e132f14af 7b8874da9c2e]
	I0419 12:48:19.908194    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:48:19.921303    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:48:19.921374    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:48:19.933533    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:48:19.933616    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:48:19.948274    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:48:19.948348    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:48:19.961220    9295 logs.go:276] 0 containers: []
	W0419 12:48:19.961231    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:48:19.961286    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:48:19.973633    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:48:19.973657    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:48:19.973663    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:48:19.978406    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:48:19.978419    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:48:19.991820    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:48:19.991833    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:48:20.005933    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:48:20.005945    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:48:20.023134    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:48:20.023151    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:48:20.039079    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:48:20.039092    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:48:20.053724    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:48:20.053735    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:48:20.067086    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:48:20.067096    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:48:20.079006    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:48:20.079015    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:48:20.097574    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:48:20.097586    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:48:20.109819    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:48:20.109830    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:48:20.144212    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:48:20.144222    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:48:20.179570    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:48:20.179583    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:48:20.202804    9295 logs.go:123] Gathering logs for coredns [bc17caa7fd02] ...
	I0419 12:48:20.202810    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc17caa7fd02"
	I0419 12:48:20.214251    9295 logs.go:123] Gathering logs for coredns [de125cf822c2] ...
	I0419 12:48:20.214262    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de125cf822c2"
	I0419 12:48:22.727675    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:48:27.728964    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:48:27.729306    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:48:27.776489    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:48:27.776636    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:48:27.797378    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:48:27.797464    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:48:27.812231    9295 logs.go:276] 4 containers: [bc17caa7fd02 de125cf822c2 c96e132f14af 7b8874da9c2e]
	I0419 12:48:27.812304    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:48:27.826049    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:48:27.826115    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:48:27.836716    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:48:27.836774    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:48:27.846976    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:48:27.847045    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:48:27.857605    9295 logs.go:276] 0 containers: []
	W0419 12:48:27.857614    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:48:27.857668    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:48:27.868004    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:48:27.868020    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:48:27.868025    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:48:27.872506    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:48:27.872514    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:48:27.884461    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:48:27.884473    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:48:27.899385    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:48:27.899398    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:48:27.911093    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:48:27.911104    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:48:27.945352    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:48:27.945362    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:48:27.980754    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:48:27.980768    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:48:27.994626    9295 logs.go:123] Gathering logs for coredns [bc17caa7fd02] ...
	I0419 12:48:27.994639    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc17caa7fd02"
	I0419 12:48:28.006269    9295 logs.go:123] Gathering logs for coredns [de125cf822c2] ...
	I0419 12:48:28.006283    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de125cf822c2"
	I0419 12:48:28.017577    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:48:28.017589    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:48:28.029105    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:48:28.029119    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:48:28.040808    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:48:28.040821    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:48:28.058836    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:48:28.058847    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:48:28.071569    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:48:28.071578    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:48:28.086099    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:48:28.086114    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:48:30.612416    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:48:35.614749    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:48:35.614837    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:48:35.628518    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:48:35.628593    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:48:35.652813    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:48:35.652878    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:48:35.665490    9295 logs.go:276] 4 containers: [bc17caa7fd02 de125cf822c2 c96e132f14af 7b8874da9c2e]
	I0419 12:48:35.665572    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:48:35.676920    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:48:35.676998    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:48:35.688346    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:48:35.688399    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:48:35.700063    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:48:35.700131    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:48:35.711147    9295 logs.go:276] 0 containers: []
	W0419 12:48:35.711161    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:48:35.711218    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:48:35.722575    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:48:35.722594    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:48:35.722599    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:48:35.738376    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:48:35.738387    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:48:35.757086    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:48:35.757096    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:48:35.770640    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:48:35.770653    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:48:35.806984    9295 logs.go:123] Gathering logs for coredns [bc17caa7fd02] ...
	I0419 12:48:35.806996    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc17caa7fd02"
	I0419 12:48:35.820746    9295 logs.go:123] Gathering logs for coredns [de125cf822c2] ...
	I0419 12:48:35.820759    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de125cf822c2"
	I0419 12:48:35.835910    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:48:35.835921    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:48:35.848975    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:48:35.848984    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:48:35.860828    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:48:35.860841    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:48:35.889614    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:48:35.889623    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:48:35.924163    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:48:35.924184    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:48:35.928992    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:48:35.929002    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:48:35.945359    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:48:35.945370    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:48:35.957851    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:48:35.957890    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:48:35.972501    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:48:35.972513    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:48:38.498727    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:48:43.499704    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:48:43.500122    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:48:43.536972    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:48:43.537104    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:48:43.558524    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:48:43.558612    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:48:43.573821    9295 logs.go:276] 4 containers: [bc17caa7fd02 de125cf822c2 c96e132f14af 7b8874da9c2e]
	I0419 12:48:43.573895    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:48:43.586451    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:48:43.586517    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:48:43.597311    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:48:43.597376    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:48:43.608057    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:48:43.608126    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:48:43.618700    9295 logs.go:276] 0 containers: []
	W0419 12:48:43.618709    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:48:43.618759    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:48:43.633283    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:48:43.633300    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:48:43.633305    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:48:43.637593    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:48:43.637600    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:48:43.652326    9295 logs.go:123] Gathering logs for coredns [bc17caa7fd02] ...
	I0419 12:48:43.652339    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc17caa7fd02"
	I0419 12:48:43.668631    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:48:43.668643    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:48:43.702286    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:48:43.702294    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:48:43.716932    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:48:43.716945    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:48:43.751332    9295 logs.go:123] Gathering logs for coredns [de125cf822c2] ...
	I0419 12:48:43.751346    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de125cf822c2"
	I0419 12:48:43.763566    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:48:43.763580    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:48:43.775740    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:48:43.775753    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:48:43.793149    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:48:43.793161    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:48:43.807270    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:48:43.807284    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:48:43.820760    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:48:43.820778    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:48:43.833565    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:48:43.833578    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:48:43.856549    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:48:43.856556    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:48:43.868161    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:48:43.868173    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:48:46.382001    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:48:51.384737    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:48:51.385050    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0419 12:48:51.418278    9295 logs.go:276] 1 containers: [d9a0291ff5bd]
	I0419 12:48:51.418387    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0419 12:48:51.438299    9295 logs.go:276] 1 containers: [a5af086441f1]
	I0419 12:48:51.438398    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0419 12:48:51.452670    9295 logs.go:276] 4 containers: [bc17caa7fd02 de125cf822c2 c96e132f14af 7b8874da9c2e]
	I0419 12:48:51.452747    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0419 12:48:51.464413    9295 logs.go:276] 1 containers: [f48019bdacb7]
	I0419 12:48:51.464481    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0419 12:48:51.475266    9295 logs.go:276] 1 containers: [6327e13cc1c8]
	I0419 12:48:51.475345    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0419 12:48:51.486159    9295 logs.go:276] 1 containers: [20bc0fbff364]
	I0419 12:48:51.486228    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0419 12:48:51.496300    9295 logs.go:276] 0 containers: []
	W0419 12:48:51.496311    9295 logs.go:278] No container was found matching "kindnet"
	I0419 12:48:51.496372    9295 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0419 12:48:51.506895    9295 logs.go:276] 1 containers: [ee19da475ebb]
	I0419 12:48:51.506910    9295 logs.go:123] Gathering logs for coredns [c96e132f14af] ...
	I0419 12:48:51.506915    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96e132f14af"
	I0419 12:48:51.518379    9295 logs.go:123] Gathering logs for kube-scheduler [f48019bdacb7] ...
	I0419 12:48:51.518391    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f48019bdacb7"
	I0419 12:48:51.533591    9295 logs.go:123] Gathering logs for describe nodes ...
	I0419 12:48:51.533600    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0419 12:48:51.568745    9295 logs.go:123] Gathering logs for kube-apiserver [d9a0291ff5bd] ...
	I0419 12:48:51.568759    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9a0291ff5bd"
	I0419 12:48:51.583973    9295 logs.go:123] Gathering logs for coredns [bc17caa7fd02] ...
	I0419 12:48:51.583986    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc17caa7fd02"
	I0419 12:48:51.595786    9295 logs.go:123] Gathering logs for coredns [de125cf822c2] ...
	I0419 12:48:51.595797    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de125cf822c2"
	I0419 12:48:51.607001    9295 logs.go:123] Gathering logs for coredns [7b8874da9c2e] ...
	I0419 12:48:51.607014    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8874da9c2e"
	I0419 12:48:51.618995    9295 logs.go:123] Gathering logs for kube-proxy [6327e13cc1c8] ...
	I0419 12:48:51.619007    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6327e13cc1c8"
	I0419 12:48:51.630648    9295 logs.go:123] Gathering logs for kubelet ...
	I0419 12:48:51.630659    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0419 12:48:51.666082    9295 logs.go:123] Gathering logs for etcd [a5af086441f1] ...
	I0419 12:48:51.666090    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5af086441f1"
	I0419 12:48:51.680296    9295 logs.go:123] Gathering logs for Docker ...
	I0419 12:48:51.680307    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0419 12:48:51.704839    9295 logs.go:123] Gathering logs for dmesg ...
	I0419 12:48:51.704846    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0419 12:48:51.708874    9295 logs.go:123] Gathering logs for kube-controller-manager [20bc0fbff364] ...
	I0419 12:48:51.708882    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20bc0fbff364"
	I0419 12:48:51.725828    9295 logs.go:123] Gathering logs for storage-provisioner [ee19da475ebb] ...
	I0419 12:48:51.725839    9295 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee19da475ebb"
	I0419 12:48:51.739340    9295 logs.go:123] Gathering logs for container status ...
	I0419 12:48:51.739351    9295 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0419 12:48:54.252815    9295 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0419 12:48:59.255483    9295 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0419 12:48:59.261560    9295 out.go:177] 
	W0419 12:48:59.265572    9295 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0419 12:48:59.265595    9295 out.go:239] * 
	W0419 12:48:59.267593    9295 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:48:59.279461    9295 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-860000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.45s)
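The repeating block above is minikube's apiserver readiness probe (the api_server.go:253/269 lines): it issues GET https://10.0.2.15:8443/healthz, each attempt dies after ~5s with "context deadline exceeded", and once the 6m0s node-wait budget is spent the start aborts with GUEST_START. The Go sketch below reproduces that polling pattern for manual diagnosis; it is an illustration under assumptions, not minikube's actual implementation. The URL, per-attempt timeout, and 6m budget are taken from the log; the retry cadence is approximated from the timestamps.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Client timeout mirrors the ~5s gap before each "context deadline exceeded".
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver's serving cert is not trusted by this host during bring-up.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver is healthy")
					return
				}
			}
			time.Sleep(2 * time.Second) // pause between rounds, approximated from the log
		}
		fmt.Println("apiserver healthz never reported healthy") // the GUEST_START failure above
	}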

TestPause/serial/Start (9.87s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-598000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-598000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.835035667s)

-- stdout --
	* [pause-598000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-598000" primary control-plane node in "pause-598000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-598000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-598000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-598000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-598000 -n pause-598000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-598000 -n pause-598000: exit status 7 (31.3725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-598000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.87s)
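This and every remaining qemu2 failure in the report share one root cause: nothing is listening on /var/run/socket_vmnet, so the driver is refused before a VM ever boots. The sketch below probes that socket directly; it is a connectivity check only and does not speak the vmnet protocol. The socket path is taken from the error output; the 2s timeout is an arbitrary choice for the example.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Path taken from the "Failed to connect" errors above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With no socket_vmnet daemon listening, this fails with
			// "connection refused" -- the same error behind every qemu2 start here.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}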

TestNoKubernetes/serial/StartWithK8s (9.8s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-537000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-537000 --driver=qemu2 : exit status 80 (9.745485833s)

-- stdout --
	* [NoKubernetes-537000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-537000" primary control-plane node in "NoKubernetes-537000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-537000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-537000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-537000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-537000 -n NoKubernetes-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-537000 -n NoKubernetes-537000: exit status 7 (52.911334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.80s)

TestNoKubernetes/serial/StartWithStopK8s (5.35s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-537000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-537000 --no-kubernetes --driver=qemu2 : exit status 80 (5.311659125s)

-- stdout --
	* [NoKubernetes-537000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-537000
	* Restarting existing qemu2 VM for "NoKubernetes-537000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-537000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-537000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-537000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-537000 -n NoKubernetes-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-537000 -n NoKubernetes-537000: exit status 7 (35.997459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.35s)

TestNoKubernetes/serial/Start (5.33s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-537000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-537000 --no-kubernetes --driver=qemu2 : exit status 80 (5.255779958s)

-- stdout --
	* [NoKubernetes-537000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-537000
	* Restarting existing qemu2 VM for "NoKubernetes-537000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-537000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-537000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-537000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-537000 -n NoKubernetes-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-537000 -n NoKubernetes-537000: exit status 7 (68.533708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.33s)

TestNoKubernetes/serial/StartNoArgs (5.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-537000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-537000 --driver=qemu2 : exit status 80 (5.265245709s)

-- stdout --
	* [NoKubernetes-537000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-537000
	* Restarting existing qemu2 VM for "NoKubernetes-537000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-537000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-537000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-537000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-537000 -n NoKubernetes-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-537000 -n NoKubernetes-537000: exit status 7 (69.67025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.34s)

TestNetworkPlugins/group/auto/Start (9.76s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.756272209s)

-- stdout --
	* [auto-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-342000" primary control-plane node in "auto-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:46:59.636503    9527 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:46:59.636765    9527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:46:59.636769    9527 out.go:304] Setting ErrFile to fd 2...
	I0419 12:46:59.636771    9527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:46:59.636917    9527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:46:59.638139    9527 out.go:298] Setting JSON to false
	I0419 12:46:59.654874    9527 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6390,"bootTime":1713549629,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:46:59.654939    9527 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:46:59.661393    9527 out.go:177] * [auto-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:46:59.669270    9527 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:46:59.669304    9527 notify.go:220] Checking for updates...
	I0419 12:46:59.677265    9527 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:46:59.680239    9527 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:46:59.683261    9527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:46:59.686252    9527 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:46:59.689225    9527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:46:59.692666    9527 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:46:59.692737    9527 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:46:59.692792    9527 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:46:59.697142    9527 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:46:59.704281    9527 start.go:297] selected driver: qemu2
	I0419 12:46:59.704289    9527 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:46:59.704295    9527 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:46:59.706594    9527 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:46:59.709169    9527 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:46:59.712372    9527 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:46:59.712416    9527 cni.go:84] Creating CNI manager for ""
	I0419 12:46:59.712425    9527 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:46:59.712432    9527 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 12:46:59.712463    9527 start.go:340] cluster config:
	{Name:auto-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:46:59.717046    9527 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:46:59.725233    9527 out.go:177] * Starting "auto-342000" primary control-plane node in "auto-342000" cluster
	I0419 12:46:59.729290    9527 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:46:59.729303    9527 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:46:59.729309    9527 cache.go:56] Caching tarball of preloaded images
	I0419 12:46:59.729361    9527 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:46:59.729375    9527 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:46:59.729420    9527 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/auto-342000/config.json ...
	I0419 12:46:59.729431    9527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/auto-342000/config.json: {Name:mkcb1a4f5ccca3694d689be50a04fdfefc5085ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:46:59.729842    9527 start.go:360] acquireMachinesLock for auto-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:46:59.729873    9527 start.go:364] duration metric: took 25.666µs to acquireMachinesLock for "auto-342000"
	I0419 12:46:59.729884    9527 start.go:93] Provisioning new machine with config: &{Name:auto-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:46:59.729914    9527 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:46:59.733289    9527 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:46:59.749704    9527 start.go:159] libmachine.API.Create for "auto-342000" (driver="qemu2")
	I0419 12:46:59.749728    9527 client.go:168] LocalClient.Create starting
	I0419 12:46:59.749785    9527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:46:59.749817    9527 main.go:141] libmachine: Decoding PEM data...
	I0419 12:46:59.749830    9527 main.go:141] libmachine: Parsing certificate...
	I0419 12:46:59.749876    9527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:46:59.749898    9527 main.go:141] libmachine: Decoding PEM data...
	I0419 12:46:59.749906    9527 main.go:141] libmachine: Parsing certificate...
	I0419 12:46:59.750425    9527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:46:59.890021    9527 main.go:141] libmachine: Creating SSH key...
	I0419 12:46:59.982299    9527 main.go:141] libmachine: Creating Disk image...
	I0419 12:46:59.982315    9527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:46:59.982532    9527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/disk.qcow2
	I0419 12:46:59.995472    9527 main.go:141] libmachine: STDOUT: 
	I0419 12:46:59.995492    9527 main.go:141] libmachine: STDERR: 
	I0419 12:46:59.995553    9527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/disk.qcow2 +20000M
	I0419 12:47:00.006963    9527 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:47:00.006990    9527 main.go:141] libmachine: STDERR: 
	I0419 12:47:00.007002    9527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/disk.qcow2
	I0419 12:47:00.007007    9527 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:47:00.007033    9527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:ea:7f:8e:8b:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/disk.qcow2
	I0419 12:47:00.008866    9527 main.go:141] libmachine: STDOUT: 
	I0419 12:47:00.008883    9527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:47:00.008902    9527 client.go:171] duration metric: took 259.174917ms to LocalClient.Create
	I0419 12:47:02.009776    9527 start.go:128] duration metric: took 2.279891875s to createHost
	I0419 12:47:02.009843    9527 start.go:83] releasing machines lock for "auto-342000", held for 2.280013625s
	W0419 12:47:02.009891    9527 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:47:02.022756    9527 out.go:177] * Deleting "auto-342000" in qemu2 ...
	W0419 12:47:02.045042    9527 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:47:02.045066    9527 start.go:728] Will try again in 5 seconds ...
	I0419 12:47:07.047263    9527 start.go:360] acquireMachinesLock for auto-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:47:07.047987    9527 start.go:364] duration metric: took 563.958µs to acquireMachinesLock for "auto-342000"
	I0419 12:47:07.048705    9527 start.go:93] Provisioning new machine with config: &{Name:auto-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:47:07.048965    9527 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:47:07.053641    9527 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:47:07.103197    9527 start.go:159] libmachine.API.Create for "auto-342000" (driver="qemu2")
	I0419 12:47:07.103257    9527 client.go:168] LocalClient.Create starting
	I0419 12:47:07.103369    9527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:47:07.103430    9527 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:07.103444    9527 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:07.103508    9527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:47:07.103552    9527 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:07.103562    9527 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:07.104127    9527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:47:07.248453    9527 main.go:141] libmachine: Creating SSH key...
	I0419 12:47:07.298898    9527 main.go:141] libmachine: Creating Disk image...
	I0419 12:47:07.298903    9527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:47:07.299073    9527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/disk.qcow2
	I0419 12:47:07.316834    9527 main.go:141] libmachine: STDOUT: 
	I0419 12:47:07.316855    9527 main.go:141] libmachine: STDERR: 
	I0419 12:47:07.316908    9527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/disk.qcow2 +20000M
	I0419 12:47:07.328020    9527 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:47:07.328045    9527 main.go:141] libmachine: STDERR: 
	I0419 12:47:07.328057    9527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/disk.qcow2
	I0419 12:47:07.328061    9527 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:47:07.328093    9527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:9e:50:7f:f6:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/auto-342000/disk.qcow2
	I0419 12:47:07.329868    9527 main.go:141] libmachine: STDOUT: 
	I0419 12:47:07.329887    9527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:47:07.329900    9527 client.go:171] duration metric: took 226.640292ms to LocalClient.Create
	I0419 12:47:09.331935    9527 start.go:128] duration metric: took 2.282988541s to createHost
	I0419 12:47:09.331976    9527 start.go:83] releasing machines lock for "auto-342000", held for 2.283998625s
	W0419 12:47:09.332100    9527 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:47:09.340311    9527 out.go:177] 
	W0419 12:47:09.343252    9527 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:47:09.343260    9527 out.go:239] * 
	* 
	W0419 12:47:09.343707    9527 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:47:09.354256    9527 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.76s)
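
The stderr trace above narrows the failure precisely: qemu-img convert and qemu-img resize both succeed, and the error appears only when libmachine launches qemu-system-aarch64 through socket_vmnet_client. Because the client appears to connect to the socket before exec'ing its command (consistent with the STDERR above, which shows only the connect failure and no QEMU output), the networking layer can be probed in isolation, without minikube or QEMU. A sketch, assuming the client accepts an arbitrary command once connected (here the harmless /usr/bin/true):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true \
	  && echo "socket_vmnet reachable"   # "Connection refused" here reproduces the failure without minikube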

TestNetworkPlugins/group/kindnet/Start (9.97s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.965609041s)

-- stdout --
	* [kindnet-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-342000" primary control-plane node in "kindnet-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:47:11.685354    9639 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:47:11.685501    9639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:47:11.685505    9639 out.go:304] Setting ErrFile to fd 2...
	I0419 12:47:11.685507    9639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:47:11.685629    9639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:47:11.686748    9639 out.go:298] Setting JSON to false
	I0419 12:47:11.702869    9639 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6402,"bootTime":1713549629,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:47:11.702934    9639 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:47:11.708625    9639 out.go:177] * [kindnet-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:47:11.716548    9639 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:47:11.716596    9639 notify.go:220] Checking for updates...
	I0419 12:47:11.721633    9639 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:47:11.724511    9639 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:47:11.727554    9639 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:47:11.730541    9639 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:47:11.733470    9639 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:47:11.736930    9639 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:47:11.736999    9639 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:47:11.737043    9639 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:47:11.740517    9639 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:47:11.747471    9639 start.go:297] selected driver: qemu2
	I0419 12:47:11.747478    9639 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:47:11.747484    9639 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:47:11.749799    9639 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:47:11.752577    9639 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:47:11.755574    9639 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:47:11.755603    9639 cni.go:84] Creating CNI manager for "kindnet"
	I0419 12:47:11.755606    9639 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0419 12:47:11.755637    9639 start.go:340] cluster config:
	{Name:kindnet-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:47:11.760099    9639 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:47:11.768556    9639 out.go:177] * Starting "kindnet-342000" primary control-plane node in "kindnet-342000" cluster
	I0419 12:47:11.772613    9639 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:47:11.772626    9639 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:47:11.772633    9639 cache.go:56] Caching tarball of preloaded images
	I0419 12:47:11.772692    9639 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:47:11.772697    9639 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:47:11.772741    9639 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/kindnet-342000/config.json ...
	I0419 12:47:11.772751    9639 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/kindnet-342000/config.json: {Name:mk27215a57ebb23e4891234c8a97332b7647c96a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:47:11.772956    9639 start.go:360] acquireMachinesLock for kindnet-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:47:11.772986    9639 start.go:364] duration metric: took 24.417µs to acquireMachinesLock for "kindnet-342000"
	I0419 12:47:11.772996    9639 start.go:93] Provisioning new machine with config: &{Name:kindnet-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:47:11.773025    9639 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:47:11.781520    9639 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:47:11.796935    9639 start.go:159] libmachine.API.Create for "kindnet-342000" (driver="qemu2")
	I0419 12:47:11.796961    9639 client.go:168] LocalClient.Create starting
	I0419 12:47:11.797018    9639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:47:11.797049    9639 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:11.797062    9639 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:11.797100    9639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:47:11.797123    9639 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:11.797132    9639 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:11.797550    9639 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:47:11.934631    9639 main.go:141] libmachine: Creating SSH key...
	I0419 12:47:12.160250    9639 main.go:141] libmachine: Creating Disk image...
	I0419 12:47:12.160264    9639 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:47:12.160469    9639 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/disk.qcow2
	I0419 12:47:12.173915    9639 main.go:141] libmachine: STDOUT: 
	I0419 12:47:12.173945    9639 main.go:141] libmachine: STDERR: 
	I0419 12:47:12.174011    9639 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/disk.qcow2 +20000M
	I0419 12:47:12.185536    9639 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:47:12.185553    9639 main.go:141] libmachine: STDERR: 
	I0419 12:47:12.185572    9639 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/disk.qcow2
	I0419 12:47:12.185580    9639 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:47:12.185615    9639 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:3b:5f:8b:8b:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/disk.qcow2
	I0419 12:47:12.187296    9639 main.go:141] libmachine: STDOUT: 
	I0419 12:47:12.187312    9639 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:47:12.187332    9639 client.go:171] duration metric: took 390.373875ms to LocalClient.Create
	I0419 12:47:14.189422    9639 start.go:128] duration metric: took 2.416427958s to createHost
	I0419 12:47:14.189513    9639 start.go:83] releasing machines lock for "kindnet-342000", held for 2.4165525s
	W0419 12:47:14.189553    9639 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:47:14.196186    9639 out.go:177] * Deleting "kindnet-342000" in qemu2 ...
	W0419 12:47:14.225729    9639 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:47:14.225743    9639 start.go:728] Will try again in 5 seconds ...
	I0419 12:47:19.227943    9639 start.go:360] acquireMachinesLock for kindnet-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:47:19.228469    9639 start.go:364] duration metric: took 411.958µs to acquireMachinesLock for "kindnet-342000"
	I0419 12:47:19.228542    9639 start.go:93] Provisioning new machine with config: &{Name:kindnet-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:47:19.228818    9639 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:47:19.238479    9639 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:47:19.284472    9639 start.go:159] libmachine.API.Create for "kindnet-342000" (driver="qemu2")
	I0419 12:47:19.284515    9639 client.go:168] LocalClient.Create starting
	I0419 12:47:19.284650    9639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:47:19.284717    9639 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:19.284731    9639 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:19.284804    9639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:47:19.284850    9639 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:19.284876    9639 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:19.285432    9639 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:47:19.430715    9639 main.go:141] libmachine: Creating SSH key...
	I0419 12:47:19.553891    9639 main.go:141] libmachine: Creating Disk image...
	I0419 12:47:19.553896    9639 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:47:19.554083    9639 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/disk.qcow2
	I0419 12:47:19.566744    9639 main.go:141] libmachine: STDOUT: 
	I0419 12:47:19.566764    9639 main.go:141] libmachine: STDERR: 
	I0419 12:47:19.566837    9639 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/disk.qcow2 +20000M
	I0419 12:47:19.578201    9639 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:47:19.578225    9639 main.go:141] libmachine: STDERR: 
	I0419 12:47:19.578237    9639 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/disk.qcow2
	I0419 12:47:19.578243    9639 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:47:19.578277    9639 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:67:f2:c4:6b:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kindnet-342000/disk.qcow2
	I0419 12:47:19.580006    9639 main.go:141] libmachine: STDOUT: 
	I0419 12:47:19.580034    9639 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:47:19.580048    9639 client.go:171] duration metric: took 295.535167ms to LocalClient.Create
	I0419 12:47:21.582221    9639 start.go:128] duration metric: took 2.353419875s to createHost
	I0419 12:47:21.582336    9639 start.go:83] releasing machines lock for "kindnet-342000", held for 2.353895084s
	W0419 12:47:21.582748    9639 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:47:21.596250    9639 out.go:177] 
	W0419 12:47:21.600491    9639 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:47:21.600529    9639 out.go:239] * 
	* 
	W0419 12:47:21.602706    9639 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:47:21.608290    9639 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.97s)

TestNetworkPlugins/group/calico/Start (9.77s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.764960417s)

-- stdout --
	* [calico-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-342000" primary control-plane node in "calico-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:47:24.043166    9753 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:47:24.043319    9753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:47:24.043323    9753 out.go:304] Setting ErrFile to fd 2...
	I0419 12:47:24.043325    9753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:47:24.043487    9753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:47:24.044521    9753 out.go:298] Setting JSON to false
	I0419 12:47:24.060819    9753 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6415,"bootTime":1713549629,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:47:24.060886    9753 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:47:24.067790    9753 out.go:177] * [calico-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:47:24.075729    9753 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:47:24.075786    9753 notify.go:220] Checking for updates...
	I0419 12:47:24.081239    9753 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:47:24.084742    9753 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:47:24.087762    9753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:47:24.090724    9753 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:47:24.093746    9753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:47:24.097134    9753 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:47:24.097204    9753 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:47:24.097265    9753 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:47:24.101779    9753 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:47:24.108717    9753 start.go:297] selected driver: qemu2
	I0419 12:47:24.108724    9753 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:47:24.108730    9753 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:47:24.111081    9753 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:47:24.113764    9753 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:47:24.117808    9753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:47:24.117848    9753 cni.go:84] Creating CNI manager for "calico"
	I0419 12:47:24.117866    9753 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0419 12:47:24.117896    9753 start.go:340] cluster config:
	{Name:calico-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:47:24.122604    9753 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:47:24.130650    9753 out.go:177] * Starting "calico-342000" primary control-plane node in "calico-342000" cluster
	I0419 12:47:24.136718    9753 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:47:24.136756    9753 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:47:24.136764    9753 cache.go:56] Caching tarball of preloaded images
	I0419 12:47:24.136848    9753 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:47:24.136857    9753 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:47:24.136911    9753 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/calico-342000/config.json ...
	I0419 12:47:24.136922    9753 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/calico-342000/config.json: {Name:mk11626c22ddf88e8b9c8842e842901a8172af0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:47:24.137162    9753 start.go:360] acquireMachinesLock for calico-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:47:24.137196    9753 start.go:364] duration metric: took 28.834µs to acquireMachinesLock for "calico-342000"
	I0419 12:47:24.137210    9753 start.go:93] Provisioning new machine with config: &{Name:calico-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:47:24.137242    9753 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:47:24.144726    9753 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:47:24.160391    9753 start.go:159] libmachine.API.Create for "calico-342000" (driver="qemu2")
	I0419 12:47:24.160423    9753 client.go:168] LocalClient.Create starting
	I0419 12:47:24.160490    9753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:47:24.160521    9753 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:24.160532    9753 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:24.160577    9753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:47:24.160600    9753 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:24.160611    9753 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:24.160959    9753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:47:24.297560    9753 main.go:141] libmachine: Creating SSH key...
	I0419 12:47:24.368053    9753 main.go:141] libmachine: Creating Disk image...
	I0419 12:47:24.368064    9753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:47:24.368284    9753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/disk.qcow2
	I0419 12:47:24.380998    9753 main.go:141] libmachine: STDOUT: 
	I0419 12:47:24.381020    9753 main.go:141] libmachine: STDERR: 
	I0419 12:47:24.381092    9753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/disk.qcow2 +20000M
	I0419 12:47:24.392270    9753 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:47:24.392288    9753 main.go:141] libmachine: STDERR: 
	I0419 12:47:24.392306    9753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/disk.qcow2
	I0419 12:47:24.392311    9753 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:47:24.392351    9753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:7a:ad:f9:46:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/disk.qcow2
	I0419 12:47:24.394220    9753 main.go:141] libmachine: STDOUT: 
	I0419 12:47:24.394238    9753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:47:24.394257    9753 client.go:171] duration metric: took 233.834125ms to LocalClient.Create
	I0419 12:47:26.396342    9753 start.go:128] duration metric: took 2.259139084s to createHost
	I0419 12:47:26.396411    9753 start.go:83] releasing machines lock for "calico-342000", held for 2.259240166s
	W0419 12:47:26.396440    9753 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:47:26.410319    9753 out.go:177] * Deleting "calico-342000" in qemu2 ...
	W0419 12:47:26.425785    9753 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:47:26.425794    9753 start.go:728] Will try again in 5 seconds ...
	I0419 12:47:31.428000    9753 start.go:360] acquireMachinesLock for calico-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:47:31.428530    9753 start.go:364] duration metric: took 419.834µs to acquireMachinesLock for "calico-342000"
	I0419 12:47:31.428666    9753 start.go:93] Provisioning new machine with config: &{Name:calico-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:47:31.428989    9753 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:47:31.437949    9753 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:47:31.487507    9753 start.go:159] libmachine.API.Create for "calico-342000" (driver="qemu2")
	I0419 12:47:31.487564    9753 client.go:168] LocalClient.Create starting
	I0419 12:47:31.487685    9753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:47:31.487756    9753 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:31.487773    9753 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:31.487855    9753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:47:31.487900    9753 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:31.487910    9753 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:31.488564    9753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:47:31.634462    9753 main.go:141] libmachine: Creating SSH key...
	I0419 12:47:31.710815    9753 main.go:141] libmachine: Creating Disk image...
	I0419 12:47:31.710824    9753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:47:31.711012    9753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/disk.qcow2
	I0419 12:47:31.723815    9753 main.go:141] libmachine: STDOUT: 
	I0419 12:47:31.723837    9753 main.go:141] libmachine: STDERR: 
	I0419 12:47:31.723909    9753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/disk.qcow2 +20000M
	I0419 12:47:31.734893    9753 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:47:31.734913    9753 main.go:141] libmachine: STDERR: 
	I0419 12:47:31.734940    9753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/disk.qcow2
	I0419 12:47:31.734946    9753 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:47:31.734982    9753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:7e:0e:6e:e8:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/calico-342000/disk.qcow2
	I0419 12:47:31.736847    9753 main.go:141] libmachine: STDOUT: 
	I0419 12:47:31.736869    9753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:47:31.736883    9753 client.go:171] duration metric: took 249.319166ms to LocalClient.Create
	I0419 12:47:33.739082    9753 start.go:128] duration metric: took 2.310091875s to createHost
	I0419 12:47:33.739173    9753 start.go:83] releasing machines lock for "calico-342000", held for 2.310671209s
	W0419 12:47:33.739466    9753 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:47:33.749198    9753 out.go:177] 
	W0419 12:47:33.753104    9753 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:47:33.753164    9753 out.go:239] * 
	* 
	W0419 12:47:33.755785    9753 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:47:33.768162    9753 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.77s)
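This start and the two that follow fail identically: nothing is accepting connections on the unix socket /var/run/socket_vmnet when socket_vmnet_client tries to attach the VM's network device, so each qemu2 start exits with status 80. A minimal diagnostic sketch, assuming the /opt/socket_vmnet install path seen in the commands above (the --vmnet-gateway address is a conventional default, not taken from this log):

    # Check that the socket_vmnet daemon is up before the qemu2 driver needs it.
    ls -l /var/run/socket_vmnet              # the unix socket must exist
    sudo lsof -U | grep socket_vmnet         # a daemon process should hold it
    # If it is not running, start it (vmnet.framework requires root):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet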

TestNetworkPlugins/group/custom-flannel/Start (9.78s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.777722084s)

-- stdout --
	* [custom-flannel-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-342000" primary control-plane node in "custom-flannel-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0419 12:47:36.294814    9874 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:47:36.294972    9874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:47:36.294975    9874 out.go:304] Setting ErrFile to fd 2...
	I0419 12:47:36.294977    9874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:47:36.295104    9874 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:47:36.296151    9874 out.go:298] Setting JSON to false
	I0419 12:47:36.312715    9874 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6427,"bootTime":1713549629,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:47:36.312787    9874 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:47:36.318688    9874 out.go:177] * [custom-flannel-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:47:36.326506    9874 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:47:36.326579    9874 notify.go:220] Checking for updates...
	I0419 12:47:36.330638    9874 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:47:36.333584    9874 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:47:36.335054    9874 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:47:36.338680    9874 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:47:36.345568    9874 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:47:36.348936    9874 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:47:36.349003    9874 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:47:36.349057    9874 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:47:36.353664    9874 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:47:36.368769    9874 start.go:297] selected driver: qemu2
	I0419 12:47:36.368783    9874 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:47:36.368792    9874 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:47:36.371298    9874 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:47:36.374662    9874 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:47:36.377697    9874 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:47:36.377726    9874 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0419 12:47:36.377734    9874 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0419 12:47:36.377776    9874 start.go:340] cluster config:
	{Name:custom-flannel-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:47:36.382468    9874 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:47:36.390643    9874 out.go:177] * Starting "custom-flannel-342000" primary control-plane node in "custom-flannel-342000" cluster
	I0419 12:47:36.394561    9874 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:47:36.394578    9874 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:47:36.394587    9874 cache.go:56] Caching tarball of preloaded images
	I0419 12:47:36.394663    9874 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:47:36.394677    9874 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:47:36.394733    9874 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/custom-flannel-342000/config.json ...
	I0419 12:47:36.394745    9874 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/custom-flannel-342000/config.json: {Name:mkfe3d6b409a41f9e7bd5631cf1be8e8c20c2d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:47:36.394977    9874 start.go:360] acquireMachinesLock for custom-flannel-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:47:36.395017    9874 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "custom-flannel-342000"
	I0419 12:47:36.395029    9874 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:47:36.395065    9874 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:47:36.403569    9874 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:47:36.421643    9874 start.go:159] libmachine.API.Create for "custom-flannel-342000" (driver="qemu2")
	I0419 12:47:36.421678    9874 client.go:168] LocalClient.Create starting
	I0419 12:47:36.421738    9874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:47:36.421778    9874 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:36.421788    9874 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:36.421829    9874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:47:36.421855    9874 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:36.421862    9874 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:36.422260    9874 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:47:36.558741    9874 main.go:141] libmachine: Creating SSH key...
	I0419 12:47:36.614790    9874 main.go:141] libmachine: Creating Disk image...
	I0419 12:47:36.614795    9874 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:47:36.614970    9874 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/disk.qcow2
	I0419 12:47:36.627893    9874 main.go:141] libmachine: STDOUT: 
	I0419 12:47:36.627929    9874 main.go:141] libmachine: STDERR: 
	I0419 12:47:36.627989    9874 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/disk.qcow2 +20000M
	I0419 12:47:36.639264    9874 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:47:36.639282    9874 main.go:141] libmachine: STDERR: 
	I0419 12:47:36.639297    9874 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/disk.qcow2
	I0419 12:47:36.639302    9874 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:47:36.639330    9874 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:58:60:e4:a2:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/disk.qcow2
	I0419 12:47:36.641144    9874 main.go:141] libmachine: STDOUT: 
	I0419 12:47:36.641160    9874 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:47:36.641181    9874 client.go:171] duration metric: took 219.503125ms to LocalClient.Create
	I0419 12:47:38.643342    9874 start.go:128] duration metric: took 2.248290417s to createHost
	I0419 12:47:38.643446    9874 start.go:83] releasing machines lock for "custom-flannel-342000", held for 2.248468917s
	W0419 12:47:38.643503    9874 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:47:38.650941    9874 out.go:177] * Deleting "custom-flannel-342000" in qemu2 ...
	W0419 12:47:38.678888    9874 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:47:38.678922    9874 start.go:728] Will try again in 5 seconds ...
	I0419 12:47:43.680026    9874 start.go:360] acquireMachinesLock for custom-flannel-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:47:43.680561    9874 start.go:364] duration metric: took 408.459µs to acquireMachinesLock for "custom-flannel-342000"
	I0419 12:47:43.680639    9874 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:47:43.680955    9874 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:47:43.689201    9874 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:47:43.738624    9874 start.go:159] libmachine.API.Create for "custom-flannel-342000" (driver="qemu2")
	I0419 12:47:43.738683    9874 client.go:168] LocalClient.Create starting
	I0419 12:47:43.738807    9874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:47:43.738901    9874 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:43.738919    9874 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:43.738987    9874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:47:43.739032    9874 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:43.739046    9874 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:43.739570    9874 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:47:43.888056    9874 main.go:141] libmachine: Creating SSH key...
	I0419 12:47:43.975400    9874 main.go:141] libmachine: Creating Disk image...
	I0419 12:47:43.975408    9874 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:47:43.975616    9874 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/disk.qcow2
	I0419 12:47:43.988361    9874 main.go:141] libmachine: STDOUT: 
	I0419 12:47:43.988382    9874 main.go:141] libmachine: STDERR: 
	I0419 12:47:43.988450    9874 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/disk.qcow2 +20000M
	I0419 12:47:43.999654    9874 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:47:43.999677    9874 main.go:141] libmachine: STDERR: 
	I0419 12:47:43.999692    9874 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/disk.qcow2
	I0419 12:47:43.999696    9874 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:47:43.999736    9874 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:a6:84:27:f9:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/custom-flannel-342000/disk.qcow2
	I0419 12:47:44.001582    9874 main.go:141] libmachine: STDOUT: 
	I0419 12:47:44.001607    9874 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:47:44.001623    9874 client.go:171] duration metric: took 262.93875ms to LocalClient.Create
	I0419 12:47:46.003697    9874 start.go:128] duration metric: took 2.32277675s to createHost
	I0419 12:47:46.003729    9874 start.go:83] releasing machines lock for "custom-flannel-342000", held for 2.323198083s
	W0419 12:47:46.003847    9874 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:47:46.015128    9874 out.go:177] 
	W0419 12:47:46.019090    9874 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:47:46.019096    9874 out.go:239] * 
	* 
	W0419 12:47:46.019689    9874 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:47:46.033157    9874 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.78s)
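The refusal is reproducible outside minikube by re-running only the connection step: socket_vmnet_client connects to the socket and then execs the given command with the connected fd (fd=3 in the qemu command lines above), so any placeholder command works. A sketch, with true standing in for the qemu-system-aarch64 invocation:

    # Reproduce the failing connection step in isolation.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    # Expected on this host, matching the stderr above:
    #   Failed to connect to "/var/run/socket_vmnet": Connection refused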

TestNetworkPlugins/group/false/Start (9.76s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.757965s)

-- stdout --
	* [false-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-342000" primary control-plane node in "false-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0419 12:47:48.515135    9995 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:47:48.515324    9995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:47:48.515329    9995 out.go:304] Setting ErrFile to fd 2...
	I0419 12:47:48.515331    9995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:47:48.515482    9995 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:47:48.516922    9995 out.go:298] Setting JSON to false
	I0419 12:47:48.535303    9995 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6439,"bootTime":1713549629,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:47:48.535411    9995 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:47:48.540429    9995 out.go:177] * [false-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:47:48.544451    9995 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:47:48.544504    9995 notify.go:220] Checking for updates...
	I0419 12:47:48.548288    9995 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:47:48.552477    9995 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:47:48.555477    9995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:47:48.556602    9995 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:47:48.559507    9995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:47:48.562905    9995 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:47:48.562984    9995 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:47:48.563036    9995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:47:48.567343    9995 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:47:48.574466    9995 start.go:297] selected driver: qemu2
	I0419 12:47:48.574479    9995 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:47:48.574489    9995 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:47:48.577006    9995 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:47:48.580512    9995 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:47:48.583536    9995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:47:48.583570    9995 cni.go:84] Creating CNI manager for "false"
	I0419 12:47:48.583600    9995 start.go:340] cluster config:
	{Name:false-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:false-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:47:48.588539    9995 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:47:48.608516    9995 out.go:177] * Starting "false-342000" primary control-plane node in "false-342000" cluster
	I0419 12:47:48.612507    9995 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:47:48.612539    9995 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:47:48.612547    9995 cache.go:56] Caching tarball of preloaded images
	I0419 12:47:48.612632    9995 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:47:48.612639    9995 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:47:48.612700    9995 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/false-342000/config.json ...
	I0419 12:47:48.612711    9995 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/false-342000/config.json: {Name:mk4fd274814851bdbeafe5999595e452d6555c92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:47:48.613056    9995 start.go:360] acquireMachinesLock for false-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:47:48.613087    9995 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "false-342000"
	I0419 12:47:48.613107    9995 start.go:93] Provisioning new machine with config: &{Name:false-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:false-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:47:48.613139    9995 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:47:48.621461    9995 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:47:48.637484    9995 start.go:159] libmachine.API.Create for "false-342000" (driver="qemu2")
	I0419 12:47:48.637509    9995 client.go:168] LocalClient.Create starting
	I0419 12:47:48.637573    9995 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:47:48.637607    9995 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:48.637617    9995 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:48.637659    9995 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:47:48.637684    9995 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:48.637694    9995 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:48.638135    9995 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:47:48.775774    9995 main.go:141] libmachine: Creating SSH key...
	I0419 12:47:48.850790    9995 main.go:141] libmachine: Creating Disk image...
	I0419 12:47:48.850796    9995 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:47:48.850980    9995 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/disk.qcow2
	I0419 12:47:48.863668    9995 main.go:141] libmachine: STDOUT: 
	I0419 12:47:48.863688    9995 main.go:141] libmachine: STDERR: 
	I0419 12:47:48.863736    9995 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/disk.qcow2 +20000M
	I0419 12:47:48.874926    9995 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:47:48.874948    9995 main.go:141] libmachine: STDERR: 
	I0419 12:47:48.874971    9995 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/disk.qcow2
	I0419 12:47:48.874975    9995 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:47:48.875005    9995 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:c5:4c:92:4e:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/disk.qcow2
	I0419 12:47:48.876919    9995 main.go:141] libmachine: STDOUT: 
	I0419 12:47:48.876935    9995 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:47:48.876956    9995 client.go:171] duration metric: took 239.446458ms to LocalClient.Create
	I0419 12:47:50.879120    9995 start.go:128] duration metric: took 2.265999125s to createHost
	I0419 12:47:50.879195    9995 start.go:83] releasing machines lock for "false-342000", held for 2.266151792s
	W0419 12:47:50.879247    9995 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:47:50.890978    9995 out.go:177] * Deleting "false-342000" in qemu2 ...
	W0419 12:47:50.913221    9995 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:47:50.913249    9995 start.go:728] Will try again in 5 seconds ...
	I0419 12:47:55.914009    9995 start.go:360] acquireMachinesLock for false-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:47:55.914674    9995 start.go:364] duration metric: took 556.042µs to acquireMachinesLock for "false-342000"
	I0419 12:47:55.914852    9995 start.go:93] Provisioning new machine with config: &{Name:false-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:false-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:47:55.915217    9995 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:47:55.924872    9995 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:47:55.974888    9995 start.go:159] libmachine.API.Create for "false-342000" (driver="qemu2")
	I0419 12:47:55.974948    9995 client.go:168] LocalClient.Create starting
	I0419 12:47:55.975062    9995 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:47:55.975129    9995 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:55.975144    9995 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:55.975208    9995 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:47:55.975253    9995 main.go:141] libmachine: Decoding PEM data...
	I0419 12:47:55.975267    9995 main.go:141] libmachine: Parsing certificate...
	I0419 12:47:55.975856    9995 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:47:56.126229    9995 main.go:141] libmachine: Creating SSH key...
	I0419 12:47:56.177380    9995 main.go:141] libmachine: Creating Disk image...
	I0419 12:47:56.177386    9995 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:47:56.177585    9995 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/disk.qcow2
	I0419 12:47:56.190585    9995 main.go:141] libmachine: STDOUT: 
	I0419 12:47:56.190606    9995 main.go:141] libmachine: STDERR: 
	I0419 12:47:56.190659    9995 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/disk.qcow2 +20000M
	I0419 12:47:56.202706    9995 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:47:56.202729    9995 main.go:141] libmachine: STDERR: 
	I0419 12:47:56.202743    9995 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/disk.qcow2
	I0419 12:47:56.202748    9995 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:47:56.202788    9995 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:24:ec:d5:12:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/false-342000/disk.qcow2
	I0419 12:47:56.204744    9995 main.go:141] libmachine: STDOUT: 
	I0419 12:47:56.204760    9995 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:47:56.204777    9995 client.go:171] duration metric: took 229.824583ms to LocalClient.Create
	I0419 12:47:58.206827    9995 start.go:128] duration metric: took 2.291625625s to createHost
	I0419 12:47:58.206874    9995 start.go:83] releasing machines lock for "false-342000", held for 2.292215791s
	W0419 12:47:58.207012    9995 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:47:58.213450    9995 out.go:177] 
	W0419 12:47:58.217451    9995 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:47:58.217459    9995 out.go:239] * 
	* 
	W0419 12:47:58.218257    9995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:47:58.231213    9995 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.76s)
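Note: every failure in this group has the same proximate cause: the socket_vmnet client cannot reach the daemon's UNIX socket, so QEMU is never launched ("Failed to connect to "/var/run/socket_vmnet": Connection refused"). A minimal check on the CI host, assuming the paths logged above, might look like:

	# Does the socket exist, and is a daemon listening on it?
	ls -l /var/run/socket_vmnet
	# Probe the UNIX socket directly; if the daemon is down this reports
	# the same "Connection refused" the tests hit.
	nc -U /var/run/socket_vmnet </dev/null
	# If nothing is listening, relaunching the daemon should unblock the group
	# (flags follow the socket_vmnet README example; the gateway address is an
	# assumption -- adjust for the local install):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet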

TestNetworkPlugins/group/enable-default-cni/Start (9.74s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.741188375s)

-- stdout --
	* [enable-default-cni-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-342000" primary control-plane node in "enable-default-cni-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:48:00.446102   10105 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:48:00.446243   10105 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:48:00.446247   10105 out.go:304] Setting ErrFile to fd 2...
	I0419 12:48:00.446249   10105 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:48:00.446375   10105 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:48:00.447511   10105 out.go:298] Setting JSON to false
	I0419 12:48:00.463970   10105 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6451,"bootTime":1713549629,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:48:00.464030   10105 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:48:00.470724   10105 out.go:177] * [enable-default-cni-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:48:00.476624   10105 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:48:00.480708   10105 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:48:00.476706   10105 notify.go:220] Checking for updates...
	I0419 12:48:00.486646   10105 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:48:00.489653   10105 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:48:00.492557   10105 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:48:00.495618   10105 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:48:00.498971   10105 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:48:00.499037   10105 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:48:00.499082   10105 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:48:00.502544   10105 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:48:00.509637   10105 start.go:297] selected driver: qemu2
	I0419 12:48:00.509643   10105 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:48:00.509649   10105 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:48:00.511913   10105 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:48:00.514629   10105 out.go:177] * Automatically selected the socket_vmnet network
	E0419 12:48:00.517692   10105 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0419 12:48:00.517704   10105 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:48:00.517743   10105 cni.go:84] Creating CNI manager for "bridge"
	I0419 12:48:00.517747   10105 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 12:48:00.517784   10105 start.go:340] cluster config:
	{Name:enable-default-cni-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:48:00.522154   10105 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:48:00.530678   10105 out.go:177] * Starting "enable-default-cni-342000" primary control-plane node in "enable-default-cni-342000" cluster
	I0419 12:48:00.534691   10105 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:48:00.534706   10105 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:48:00.534716   10105 cache.go:56] Caching tarball of preloaded images
	I0419 12:48:00.534778   10105 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:48:00.534783   10105 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:48:00.534838   10105 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/enable-default-cni-342000/config.json ...
	I0419 12:48:00.534848   10105 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/enable-default-cni-342000/config.json: {Name:mkceae623f287dae5a722e813acf24774a36b247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:48:00.535051   10105 start.go:360] acquireMachinesLock for enable-default-cni-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:48:00.535086   10105 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "enable-default-cni-342000"
	I0419 12:48:00.535097   10105 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:48:00.535124   10105 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:48:00.543658   10105 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:48:00.559422   10105 start.go:159] libmachine.API.Create for "enable-default-cni-342000" (driver="qemu2")
	I0419 12:48:00.559447   10105 client.go:168] LocalClient.Create starting
	I0419 12:48:00.559504   10105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:48:00.559534   10105 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:00.559544   10105 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:00.559584   10105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:48:00.559607   10105 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:00.559614   10105 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:00.559959   10105 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:48:00.699546   10105 main.go:141] libmachine: Creating SSH key...
	I0419 12:48:00.781762   10105 main.go:141] libmachine: Creating Disk image...
	I0419 12:48:00.781769   10105 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:48:00.781981   10105 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/disk.qcow2
	I0419 12:48:00.794960   10105 main.go:141] libmachine: STDOUT: 
	I0419 12:48:00.794987   10105 main.go:141] libmachine: STDERR: 
	I0419 12:48:00.795056   10105 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/disk.qcow2 +20000M
	I0419 12:48:00.806330   10105 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:48:00.806346   10105 main.go:141] libmachine: STDERR: 
	I0419 12:48:00.806358   10105 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/disk.qcow2
	I0419 12:48:00.806363   10105 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:48:00.806389   10105 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:10:36:4e:a3:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/disk.qcow2
	I0419 12:48:00.808163   10105 main.go:141] libmachine: STDOUT: 
	I0419 12:48:00.808179   10105 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:48:00.808198   10105 client.go:171] duration metric: took 248.750541ms to LocalClient.Create
	I0419 12:48:02.810467   10105 start.go:128] duration metric: took 2.275366166s to createHost
	I0419 12:48:02.810541   10105 start.go:83] releasing machines lock for "enable-default-cni-342000", held for 2.275496083s
	W0419 12:48:02.810608   10105 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:48:02.822865   10105 out.go:177] * Deleting "enable-default-cni-342000" in qemu2 ...
	W0419 12:48:02.846261   10105 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:48:02.846297   10105 start.go:728] Will try again in 5 seconds ...
	I0419 12:48:07.847240   10105 start.go:360] acquireMachinesLock for enable-default-cni-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:48:07.847703   10105 start.go:364] duration metric: took 326.792µs to acquireMachinesLock for "enable-default-cni-342000"
	I0419 12:48:07.847819   10105 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:48:07.847975   10105 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:48:07.853542   10105 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:48:07.893934   10105 start.go:159] libmachine.API.Create for "enable-default-cni-342000" (driver="qemu2")
	I0419 12:48:07.893992   10105 client.go:168] LocalClient.Create starting
	I0419 12:48:07.894101   10105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:48:07.894175   10105 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:07.894188   10105 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:07.894248   10105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:48:07.894286   10105 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:07.894299   10105 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:07.894784   10105 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:48:08.040381   10105 main.go:141] libmachine: Creating SSH key...
	I0419 12:48:08.083561   10105 main.go:141] libmachine: Creating Disk image...
	I0419 12:48:08.083567   10105 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:48:08.083742   10105 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/disk.qcow2
	I0419 12:48:08.096585   10105 main.go:141] libmachine: STDOUT: 
	I0419 12:48:08.096606   10105 main.go:141] libmachine: STDERR: 
	I0419 12:48:08.096664   10105 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/disk.qcow2 +20000M
	I0419 12:48:08.108065   10105 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:48:08.108086   10105 main.go:141] libmachine: STDERR: 
	I0419 12:48:08.108101   10105 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/disk.qcow2
	I0419 12:48:08.108106   10105 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:48:08.108154   10105 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:fb:54:8b:04:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/enable-default-cni-342000/disk.qcow2
	I0419 12:48:08.109965   10105 main.go:141] libmachine: STDOUT: 
	I0419 12:48:08.109981   10105 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:48:08.109993   10105 client.go:171] duration metric: took 216.00075ms to LocalClient.Create
	I0419 12:48:10.112169   10105 start.go:128] duration metric: took 2.264210833s to createHost
	I0419 12:48:10.112245   10105 start.go:83] releasing machines lock for "enable-default-cni-342000", held for 2.264575625s
	W0419 12:48:10.112698   10105 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:48:10.126362   10105 out.go:177] 
	W0419 12:48:10.129431   10105 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:48:10.129488   10105 out.go:239] * 
	* 
	W0419 12:48:10.132146   10105 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:48:10.144296   10105 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.74s)
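Note: separate from the socket_vmnet failure, the stderr log above records that --enable-default-cni is deprecated and is internally rewritten to --cni=bridge (E0419 12:48:00.517692). Once the network issue is resolved, an equivalent invocation without the deprecated flag would presumably be:

	out/minikube-darwin-arm64 start -p enable-default-cni-342000 --memory=3072 \
	  --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2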

TestNetworkPlugins/group/flannel/Start (9.79s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.786093083s)

-- stdout --
	* [flannel-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-342000" primary control-plane node in "flannel-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:48:12.409210   10218 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:48:12.409366   10218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:48:12.409369   10218 out.go:304] Setting ErrFile to fd 2...
	I0419 12:48:12.409372   10218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:48:12.409485   10218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:48:12.410529   10218 out.go:298] Setting JSON to false
	I0419 12:48:12.426601   10218 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6463,"bootTime":1713549629,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:48:12.426666   10218 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:48:12.432357   10218 out.go:177] * [flannel-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:48:12.440297   10218 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:48:12.445277   10218 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:48:12.440354   10218 notify.go:220] Checking for updates...
	I0419 12:48:12.448335   10218 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:48:12.451243   10218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:48:12.454253   10218 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:48:12.457326   10218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:48:12.460695   10218 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:48:12.460764   10218 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:48:12.460810   10218 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:48:12.465287   10218 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:48:12.472188   10218 start.go:297] selected driver: qemu2
	I0419 12:48:12.472199   10218 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:48:12.472206   10218 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:48:12.474490   10218 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:48:12.477260   10218 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:48:12.480394   10218 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:48:12.480434   10218 cni.go:84] Creating CNI manager for "flannel"
	I0419 12:48:12.480439   10218 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0419 12:48:12.480489   10218 start.go:340] cluster config:
	{Name:flannel-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:48:12.484706   10218 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:48:12.493222   10218 out.go:177] * Starting "flannel-342000" primary control-plane node in "flannel-342000" cluster
	I0419 12:48:12.497297   10218 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:48:12.497314   10218 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:48:12.497320   10218 cache.go:56] Caching tarball of preloaded images
	I0419 12:48:12.497380   10218 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:48:12.497385   10218 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:48:12.497443   10218 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/flannel-342000/config.json ...
	I0419 12:48:12.497458   10218 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/flannel-342000/config.json: {Name:mkd34c07c9eac61ad25413bd74ef7f57ba1e2d38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:48:12.497762   10218 start.go:360] acquireMachinesLock for flannel-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:48:12.497793   10218 start.go:364] duration metric: took 25.959µs to acquireMachinesLock for "flannel-342000"
	I0419 12:48:12.497804   10218 start.go:93] Provisioning new machine with config: &{Name:flannel-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:flannel-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:48:12.497849   10218 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:48:12.505078   10218 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:48:12.519977   10218 start.go:159] libmachine.API.Create for "flannel-342000" (driver="qemu2")
	I0419 12:48:12.520004   10218 client.go:168] LocalClient.Create starting
	I0419 12:48:12.520059   10218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:48:12.520093   10218 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:12.520105   10218 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:12.520151   10218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:48:12.520173   10218 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:12.520182   10218 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:12.520520   10218 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:48:12.658780   10218 main.go:141] libmachine: Creating SSH key...
	I0419 12:48:12.767889   10218 main.go:141] libmachine: Creating Disk image...
	I0419 12:48:12.767895   10218 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:48:12.768379   10218 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/disk.qcow2
	I0419 12:48:12.781093   10218 main.go:141] libmachine: STDOUT: 
	I0419 12:48:12.781120   10218 main.go:141] libmachine: STDERR: 
	I0419 12:48:12.781179   10218 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/disk.qcow2 +20000M
	I0419 12:48:12.792459   10218 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:48:12.792485   10218 main.go:141] libmachine: STDERR: 
	I0419 12:48:12.792507   10218 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/disk.qcow2
	I0419 12:48:12.792512   10218 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:48:12.792546   10218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:eb:77:ad:75:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/disk.qcow2
	I0419 12:48:12.794259   10218 main.go:141] libmachine: STDOUT: 
	I0419 12:48:12.794274   10218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:48:12.794293   10218 client.go:171] duration metric: took 274.290583ms to LocalClient.Create
	I0419 12:48:14.796455   10218 start.go:128] duration metric: took 2.298628625s to createHost
	I0419 12:48:14.796534   10218 start.go:83] releasing machines lock for "flannel-342000", held for 2.298783084s
	W0419 12:48:14.796609   10218 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:48:14.807993   10218 out.go:177] * Deleting "flannel-342000" in qemu2 ...
	W0419 12:48:14.837348   10218 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:48:14.837387   10218 start.go:728] Will try again in 5 seconds ...
	I0419 12:48:19.839456   10218 start.go:360] acquireMachinesLock for flannel-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:48:19.839856   10218 start.go:364] duration metric: took 337.583µs to acquireMachinesLock for "flannel-342000"
	I0419 12:48:19.839899   10218 start.go:93] Provisioning new machine with config: &{Name:flannel-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:48:19.840150   10218 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:48:19.845730   10218 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:48:19.882193   10218 start.go:159] libmachine.API.Create for "flannel-342000" (driver="qemu2")
	I0419 12:48:19.882243   10218 client.go:168] LocalClient.Create starting
	I0419 12:48:19.882349   10218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:48:19.882407   10218 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:19.882420   10218 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:19.882469   10218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:48:19.882508   10218 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:19.882522   10218 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:19.882992   10218 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:48:20.026818   10218 main.go:141] libmachine: Creating SSH key...
	I0419 12:48:20.086689   10218 main.go:141] libmachine: Creating Disk image...
	I0419 12:48:20.086698   10218 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:48:20.086921   10218 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/disk.qcow2
	I0419 12:48:20.100559   10218 main.go:141] libmachine: STDOUT: 
	I0419 12:48:20.100626   10218 main.go:141] libmachine: STDERR: 
	I0419 12:48:20.100693   10218 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/disk.qcow2 +20000M
	I0419 12:48:20.113477   10218 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:48:20.113542   10218 main.go:141] libmachine: STDERR: 
	I0419 12:48:20.113564   10218 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/disk.qcow2
	I0419 12:48:20.113569   10218 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:48:20.113599   10218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:f8:6d:9a:1f:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/flannel-342000/disk.qcow2
	I0419 12:48:20.115747   10218 main.go:141] libmachine: STDOUT: 
	I0419 12:48:20.115767   10218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:48:20.115779   10218 client.go:171] duration metric: took 233.535166ms to LocalClient.Create
	I0419 12:48:22.118064   10218 start.go:128] duration metric: took 2.2779055s to createHost
	I0419 12:48:22.118212   10218 start.go:83] releasing machines lock for "flannel-342000", held for 2.27838875s
	W0419 12:48:22.118631   10218 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:48:22.132280   10218 out.go:177] 
	W0419 12:48:22.135386   10218 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:48:22.135425   10218 out.go:239] * 
	* 
	W0419 12:48:22.138112   10218 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:48:22.150279   10218 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.79s)
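The failure above is not Kubernetes-specific: the qemu2 driver dies on its first networking step because nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client is refused before QEMU ever boots. A minimal diagnostic sketch in Go, assuming only the socket path shown in the commands above (this probe is illustrative and not part of the test suite):

// probe_socket.go: dial the socket_vmnet unix socket and report whether a
// daemon is accepting connections. On this agent the dial should fail with
// "connection refused", matching the STDERR lines in the log.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sockPath = "/var/run/socket_vmnet" // path taken from the log above

	conn, err := net.DialTimeout("unix", sockPath, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sockPath)
}

A successful dial here would rule the socket out and point the investigation elsewhere; a refused dial confirms the daemon simply is not running on the agent.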

TestNetworkPlugins/group/bridge/Start (9.79s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.789285875s)

-- stdout --
	* [bridge-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-342000" primary control-plane node in "bridge-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:48:24.593937   10338 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:48:24.594082   10338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:48:24.594085   10338 out.go:304] Setting ErrFile to fd 2...
	I0419 12:48:24.594087   10338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:48:24.594216   10338 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:48:24.595323   10338 out.go:298] Setting JSON to false
	I0419 12:48:24.612385   10338 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6475,"bootTime":1713549629,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:48:24.612459   10338 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:48:24.615352   10338 out.go:177] * [bridge-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:48:24.627922   10338 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:48:24.622976   10338 notify.go:220] Checking for updates...
	I0419 12:48:24.633925   10338 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:48:24.637808   10338 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:48:24.640908   10338 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:48:24.646810   10338 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:48:24.649942   10338 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:48:24.653253   10338 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:48:24.653319   10338 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:48:24.653370   10338 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:48:24.655913   10338 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:48:24.662925   10338 start.go:297] selected driver: qemu2
	I0419 12:48:24.662933   10338 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:48:24.662938   10338 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:48:24.665422   10338 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:48:24.666665   10338 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:48:24.669961   10338 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:48:24.669994   10338 cni.go:84] Creating CNI manager for "bridge"
	I0419 12:48:24.670001   10338 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 12:48:24.670033   10338 start.go:340] cluster config:
	{Name:bridge-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:48:24.674692   10338 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:48:24.682818   10338 out.go:177] * Starting "bridge-342000" primary control-plane node in "bridge-342000" cluster
	I0419 12:48:24.686918   10338 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:48:24.686941   10338 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:48:24.686948   10338 cache.go:56] Caching tarball of preloaded images
	I0419 12:48:24.687005   10338 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:48:24.687009   10338 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:48:24.687068   10338 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/bridge-342000/config.json ...
	I0419 12:48:24.687078   10338 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/bridge-342000/config.json: {Name:mkd76d9317e268a1cd334eb809e1420c9acd0e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:48:24.687318   10338 start.go:360] acquireMachinesLock for bridge-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:48:24.687347   10338 start.go:364] duration metric: took 25.041µs to acquireMachinesLock for "bridge-342000"
	I0419 12:48:24.687358   10338 start.go:93] Provisioning new machine with config: &{Name:bridge-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:48:24.687381   10338 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:48:24.695961   10338 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:48:24.710783   10338 start.go:159] libmachine.API.Create for "bridge-342000" (driver="qemu2")
	I0419 12:48:24.710813   10338 client.go:168] LocalClient.Create starting
	I0419 12:48:24.710879   10338 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:48:24.710910   10338 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:24.710918   10338 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:24.710957   10338 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:48:24.710979   10338 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:24.710989   10338 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:24.711418   10338 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:48:24.847762   10338 main.go:141] libmachine: Creating SSH key...
	I0419 12:48:24.905228   10338 main.go:141] libmachine: Creating Disk image...
	I0419 12:48:24.905234   10338 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:48:24.905427   10338 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/disk.qcow2
	I0419 12:48:24.918621   10338 main.go:141] libmachine: STDOUT: 
	I0419 12:48:24.918644   10338 main.go:141] libmachine: STDERR: 
	I0419 12:48:24.918699   10338 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/disk.qcow2 +20000M
	I0419 12:48:24.929930   10338 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:48:24.929948   10338 main.go:141] libmachine: STDERR: 
	I0419 12:48:24.929962   10338 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/disk.qcow2
	I0419 12:48:24.929967   10338 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:48:24.929997   10338 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:79:49:e6:15:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/disk.qcow2
	I0419 12:48:24.931858   10338 main.go:141] libmachine: STDOUT: 
	I0419 12:48:24.931875   10338 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:48:24.931893   10338 client.go:171] duration metric: took 221.079333ms to LocalClient.Create
	I0419 12:48:26.934065   10338 start.go:128] duration metric: took 2.246699584s to createHost
	I0419 12:48:26.934178   10338 start.go:83] releasing machines lock for "bridge-342000", held for 2.246871959s
	W0419 12:48:26.934274   10338 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:48:26.945811   10338 out.go:177] * Deleting "bridge-342000" in qemu2 ...
	W0419 12:48:26.972917   10338 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:48:26.972972   10338 start.go:728] Will try again in 5 seconds ...
	I0419 12:48:31.975070   10338 start.go:360] acquireMachinesLock for bridge-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:48:31.975405   10338 start.go:364] duration metric: took 244.084µs to acquireMachinesLock for "bridge-342000"
	I0419 12:48:31.975487   10338 start.go:93] Provisioning new machine with config: &{Name:bridge-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:48:31.975736   10338 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:48:31.985076   10338 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:48:32.020725   10338 start.go:159] libmachine.API.Create for "bridge-342000" (driver="qemu2")
	I0419 12:48:32.020770   10338 client.go:168] LocalClient.Create starting
	I0419 12:48:32.020867   10338 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:48:32.020935   10338 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:32.020951   10338 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:32.021003   10338 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:48:32.021041   10338 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:32.021048   10338 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:32.021671   10338 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:48:32.164835   10338 main.go:141] libmachine: Creating SSH key...
	I0419 12:48:32.285786   10338 main.go:141] libmachine: Creating Disk image...
	I0419 12:48:32.285793   10338 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:48:32.285973   10338 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/disk.qcow2
	I0419 12:48:32.298655   10338 main.go:141] libmachine: STDOUT: 
	I0419 12:48:32.298678   10338 main.go:141] libmachine: STDERR: 
	I0419 12:48:32.298740   10338 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/disk.qcow2 +20000M
	I0419 12:48:32.312545   10338 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:48:32.312565   10338 main.go:141] libmachine: STDERR: 
	I0419 12:48:32.312574   10338 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/disk.qcow2
	I0419 12:48:32.312578   10338 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:48:32.312605   10338 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:d1:e2:6d:40:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/bridge-342000/disk.qcow2
	I0419 12:48:32.314303   10338 main.go:141] libmachine: STDOUT: 
	I0419 12:48:32.314319   10338 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:48:32.314331   10338 client.go:171] duration metric: took 293.561542ms to LocalClient.Create
	I0419 12:48:34.316422   10338 start.go:128] duration metric: took 2.340719125s to createHost
	I0419 12:48:34.316498   10338 start.go:83] releasing machines lock for "bridge-342000", held for 2.341115167s
	W0419 12:48:34.316687   10338 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:48:34.327135   10338 out.go:177] 
	W0419 12:48:34.331093   10338 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:48:34.331116   10338 out.go:239] * 
	* 
	W0419 12:48:34.333790   10338 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:48:34.342064   10338 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.79s)
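The STDOUT:/STDERR: pairs in these logs are the driver echoing both streams of each command it shells out to (qemu-img, then socket_vmnet_client wrapping qemu-system-aarch64). A hedged sketch of that general pattern in Go; runAndReport and the shortened arguments are illustrative, not minikube's actual libmachine code:

// Run a command, capture stdout and stderr separately, and print them in the
// same STDOUT:/STDERR: shape seen in the log above.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func runAndReport(name string, args ...string) error {
	var stdout, stderr bytes.Buffer
	cmd := exec.Command(name, args...)
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run() // a refused socket connect surfaces here as "exit status 1"
	fmt.Printf("STDOUT: %s\n", stdout.String())
	fmt.Printf("STDERR: %s\n", stderr.String())
	return err
}

func main() {
	// Illustrative invocation only; the real paths are the long ones in the log.
	if err := runAndReport("qemu-img", "resize", "disk.qcow2", "+20000M"); err != nil {
		fmt.Println("command failed:", err)
	}
}

socket_vmnet_client exits non-zero when its connect fails, which is why the wrapped QEMU invocation is reported above as "exit status 1" even though QEMU itself never ran.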

TestNetworkPlugins/group/kubenet/Start (9.77s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-342000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.764942459s)

-- stdout --
	* [kubenet-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-342000" primary control-plane node in "kubenet-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:48:36.686588   10448 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:48:36.686725   10448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:48:36.686728   10448 out.go:304] Setting ErrFile to fd 2...
	I0419 12:48:36.686732   10448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:48:36.686866   10448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:48:36.687926   10448 out.go:298] Setting JSON to false
	I0419 12:48:36.704276   10448 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6487,"bootTime":1713549629,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:48:36.704352   10448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:48:36.708756   10448 out.go:177] * [kubenet-342000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:48:36.721615   10448 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:48:36.718742   10448 notify.go:220] Checking for updates...
	I0419 12:48:36.727622   10448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:48:36.729269   10448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:48:36.736635   10448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:48:36.739515   10448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:48:36.742619   10448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:48:36.745925   10448 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:48:36.745997   10448 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:48:36.746043   10448 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:48:36.750503   10448 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:48:36.757562   10448 start.go:297] selected driver: qemu2
	I0419 12:48:36.757567   10448 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:48:36.757573   10448 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:48:36.759890   10448 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:48:36.763586   10448 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:48:36.766726   10448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:48:36.766779   10448 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0419 12:48:36.766812   10448 start.go:340] cluster config:
	{Name:kubenet-342000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:48:36.771542   10448 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:48:36.779539   10448 out.go:177] * Starting "kubenet-342000" primary control-plane node in "kubenet-342000" cluster
	I0419 12:48:36.783630   10448 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:48:36.783649   10448 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:48:36.783659   10448 cache.go:56] Caching tarball of preloaded images
	I0419 12:48:36.783738   10448 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:48:36.783749   10448 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:48:36.783802   10448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/kubenet-342000/config.json ...
	I0419 12:48:36.783819   10448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/kubenet-342000/config.json: {Name:mkf1295236a0256e05266dea50d9901b50796496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:48:36.784053   10448 start.go:360] acquireMachinesLock for kubenet-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:48:36.784087   10448 start.go:364] duration metric: took 28.458µs to acquireMachinesLock for "kubenet-342000"
	I0419 12:48:36.784099   10448 start.go:93] Provisioning new machine with config: &{Name:kubenet-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:48:36.784130   10448 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:48:36.791578   10448 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:48:36.807768   10448 start.go:159] libmachine.API.Create for "kubenet-342000" (driver="qemu2")
	I0419 12:48:36.807798   10448 client.go:168] LocalClient.Create starting
	I0419 12:48:36.807869   10448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:48:36.807899   10448 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:36.807914   10448 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:36.807954   10448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:48:36.807979   10448 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:36.807985   10448 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:36.808345   10448 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:48:36.946857   10448 main.go:141] libmachine: Creating SSH key...
	I0419 12:48:37.027147   10448 main.go:141] libmachine: Creating Disk image...
	I0419 12:48:37.027158   10448 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:48:37.027361   10448 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/disk.qcow2
	I0419 12:48:37.040124   10448 main.go:141] libmachine: STDOUT: 
	I0419 12:48:37.040142   10448 main.go:141] libmachine: STDERR: 
	I0419 12:48:37.040197   10448 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/disk.qcow2 +20000M
	I0419 12:48:37.051391   10448 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:48:37.051409   10448 main.go:141] libmachine: STDERR: 
	I0419 12:48:37.051433   10448 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/disk.qcow2
	I0419 12:48:37.051437   10448 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:48:37.051469   10448 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:2a:ed:2d:c1:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/disk.qcow2
	I0419 12:48:37.053188   10448 main.go:141] libmachine: STDOUT: 
	I0419 12:48:37.053202   10448 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:48:37.053218   10448 client.go:171] duration metric: took 245.419667ms to LocalClient.Create
	I0419 12:48:39.055402   10448 start.go:128] duration metric: took 2.271284416s to createHost
	I0419 12:48:39.055490   10448 start.go:83] releasing machines lock for "kubenet-342000", held for 2.271445167s
	W0419 12:48:39.055622   10448 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:48:39.069138   10448 out.go:177] * Deleting "kubenet-342000" in qemu2 ...
	W0419 12:48:39.096171   10448 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:48:39.096214   10448 start.go:728] Will try again in 5 seconds ...
	I0419 12:48:44.098204   10448 start.go:360] acquireMachinesLock for kubenet-342000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:48:44.098426   10448 start.go:364] duration metric: took 191.875µs to acquireMachinesLock for "kubenet-342000"
	I0419 12:48:44.098459   10448 start.go:93] Provisioning new machine with config: &{Name:kubenet-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:48:44.098561   10448 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:48:44.104898   10448 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0419 12:48:44.130531   10448 start.go:159] libmachine.API.Create for "kubenet-342000" (driver="qemu2")
	I0419 12:48:44.130567   10448 client.go:168] LocalClient.Create starting
	I0419 12:48:44.130648   10448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:48:44.130707   10448 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:44.130717   10448 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:44.130760   10448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:48:44.130791   10448 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:44.130801   10448 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:44.131231   10448 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:48:44.272468   10448 main.go:141] libmachine: Creating SSH key...
	I0419 12:48:44.349165   10448 main.go:141] libmachine: Creating Disk image...
	I0419 12:48:44.349171   10448 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:48:44.349380   10448 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/disk.qcow2
	I0419 12:48:44.362056   10448 main.go:141] libmachine: STDOUT: 
	I0419 12:48:44.362088   10448 main.go:141] libmachine: STDERR: 
	I0419 12:48:44.362151   10448 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/disk.qcow2 +20000M
	I0419 12:48:44.373360   10448 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:48:44.373375   10448 main.go:141] libmachine: STDERR: 
	I0419 12:48:44.373393   10448 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/disk.qcow2
	I0419 12:48:44.373397   10448 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:48:44.373430   10448 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:c3:c6:f1:90:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/kubenet-342000/disk.qcow2
	I0419 12:48:44.375241   10448 main.go:141] libmachine: STDOUT: 
	I0419 12:48:44.375258   10448 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:48:44.375271   10448 client.go:171] duration metric: took 244.705833ms to LocalClient.Create
	I0419 12:48:46.377426   10448 start.go:128] duration metric: took 2.278869042s to createHost
	I0419 12:48:46.377507   10448 start.go:83] releasing machines lock for "kubenet-342000", held for 2.27911775s
	W0419 12:48:46.377905   10448 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:48:46.390584   10448 out.go:177] 
	W0419 12:48:46.393645   10448 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:48:46.393679   10448 out.go:239] * 
	* 
	W0419 12:48:46.396164   10448 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:48:46.406494   10448 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.77s)
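Note: every failure in this section reduces to the same root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu-system-aarch64 process is never launched. A minimal Go sketch of that connectivity check (not part of the minikube code base; the program and its constant are illustrative):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from the failing logs above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the STDERR captured by
		// libmachine: the daemon is not running or not listening on this path.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}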

TestStartStop/group/old-k8s-version/serial/FirstStart (9.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-084000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-084000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.708605166s)

-- stdout --
	* [old-k8s-version-084000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-084000" primary control-plane node in "old-k8s-version-084000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-084000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:48:48.723709   10561 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:48:48.723853   10561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:48:48.723856   10561 out.go:304] Setting ErrFile to fd 2...
	I0419 12:48:48.723858   10561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:48:48.724015   10561 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:48:48.725148   10561 out.go:298] Setting JSON to false
	I0419 12:48:48.741959   10561 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6499,"bootTime":1713549629,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:48:48.742041   10561 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:48:48.746879   10561 out.go:177] * [old-k8s-version-084000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:48:48.754852   10561 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:48:48.758938   10561 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:48:48.754945   10561 notify.go:220] Checking for updates...
	I0419 12:48:48.764843   10561 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:48:48.767895   10561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:48:48.770888   10561 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:48:48.773903   10561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:48:48.777224   10561 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:48:48.777295   10561 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:48:48.777340   10561 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:48:48.781942   10561 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:48:48.788836   10561 start.go:297] selected driver: qemu2
	I0419 12:48:48.788842   10561 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:48:48.788848   10561 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:48:48.790995   10561 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:48:48.793903   10561 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:48:48.795485   10561 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:48:48.795520   10561 cni.go:84] Creating CNI manager for ""
	I0419 12:48:48.795526   10561 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0419 12:48:48.795555   10561 start.go:340] cluster config:
	{Name:old-k8s-version-084000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:48:48.799920   10561 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:48:48.807753   10561 out.go:177] * Starting "old-k8s-version-084000" primary control-plane node in "old-k8s-version-084000" cluster
	I0419 12:48:48.811945   10561 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0419 12:48:48.811959   10561 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0419 12:48:48.811966   10561 cache.go:56] Caching tarball of preloaded images
	I0419 12:48:48.812020   10561 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:48:48.812025   10561 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0419 12:48:48.812079   10561 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/old-k8s-version-084000/config.json ...
	I0419 12:48:48.812091   10561 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/old-k8s-version-084000/config.json: {Name:mka4213e2eceb4376a7287278585493177a97a8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:48:48.812290   10561 start.go:360] acquireMachinesLock for old-k8s-version-084000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:48:48.812321   10561 start.go:364] duration metric: took 25.666µs to acquireMachinesLock for "old-k8s-version-084000"
	I0419 12:48:48.812332   10561 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:48:48.812359   10561 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:48:48.820905   10561 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:48:48.836381   10561 start.go:159] libmachine.API.Create for "old-k8s-version-084000" (driver="qemu2")
	I0419 12:48:48.836410   10561 client.go:168] LocalClient.Create starting
	I0419 12:48:48.836484   10561 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:48:48.836522   10561 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:48.836530   10561 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:48.836590   10561 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:48:48.836613   10561 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:48.836621   10561 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:48.837064   10561 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:48:48.977291   10561 main.go:141] libmachine: Creating SSH key...
	I0419 12:48:49.019718   10561 main.go:141] libmachine: Creating Disk image...
	I0419 12:48:49.019724   10561 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:48:49.019902   10561 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/disk.qcow2
	I0419 12:48:49.032627   10561 main.go:141] libmachine: STDOUT: 
	I0419 12:48:49.032664   10561 main.go:141] libmachine: STDERR: 
	I0419 12:48:49.032717   10561 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/disk.qcow2 +20000M
	I0419 12:48:49.043672   10561 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:48:49.043698   10561 main.go:141] libmachine: STDERR: 
	I0419 12:48:49.043730   10561 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/disk.qcow2
	I0419 12:48:49.043734   10561 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:48:49.043770   10561 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:04:ee:ec:32:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/disk.qcow2
	I0419 12:48:49.045578   10561 main.go:141] libmachine: STDOUT: 
	I0419 12:48:49.045596   10561 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:48:49.045618   10561 client.go:171] duration metric: took 209.207917ms to LocalClient.Create
	I0419 12:48:51.047672   10561 start.go:128] duration metric: took 2.2353525s to createHost
	I0419 12:48:51.047704   10561 start.go:83] releasing machines lock for "old-k8s-version-084000", held for 2.235423791s
	W0419 12:48:51.047742   10561 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:48:51.064387   10561 out.go:177] * Deleting "old-k8s-version-084000" in qemu2 ...
	W0419 12:48:51.077831   10561 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:48:51.077844   10561 start.go:728] Will try again in 5 seconds ...
	I0419 12:48:56.080084   10561 start.go:360] acquireMachinesLock for old-k8s-version-084000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:48:56.080464   10561 start.go:364] duration metric: took 279.542µs to acquireMachinesLock for "old-k8s-version-084000"
	I0419 12:48:56.080593   10561 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:48:56.080840   10561 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:48:56.088320   10561 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:48:56.131701   10561 start.go:159] libmachine.API.Create for "old-k8s-version-084000" (driver="qemu2")
	I0419 12:48:56.131768   10561 client.go:168] LocalClient.Create starting
	I0419 12:48:56.131885   10561 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:48:56.131951   10561 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:56.131964   10561 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:56.132023   10561 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:48:56.132068   10561 main.go:141] libmachine: Decoding PEM data...
	I0419 12:48:56.132084   10561 main.go:141] libmachine: Parsing certificate...
	I0419 12:48:56.132572   10561 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:48:56.276228   10561 main.go:141] libmachine: Creating SSH key...
	I0419 12:48:56.342032   10561 main.go:141] libmachine: Creating Disk image...
	I0419 12:48:56.342041   10561 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:48:56.342230   10561 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/disk.qcow2
	I0419 12:48:56.355033   10561 main.go:141] libmachine: STDOUT: 
	I0419 12:48:56.355055   10561 main.go:141] libmachine: STDERR: 
	I0419 12:48:56.355107   10561 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/disk.qcow2 +20000M
	I0419 12:48:56.366319   10561 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:48:56.366334   10561 main.go:141] libmachine: STDERR: 
	I0419 12:48:56.366345   10561 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/disk.qcow2
	I0419 12:48:56.366348   10561 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:48:56.366392   10561 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:43:50:a2:8b:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/disk.qcow2
	I0419 12:48:56.368142   10561 main.go:141] libmachine: STDOUT: 
	I0419 12:48:56.368158   10561 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:48:56.368182   10561 client.go:171] duration metric: took 236.413459ms to LocalClient.Create
	I0419 12:48:58.370302   10561 start.go:128] duration metric: took 2.289485458s to createHost
	I0419 12:48:58.370365   10561 start.go:83] releasing machines lock for "old-k8s-version-084000", held for 2.289934709s
	W0419 12:48:58.370706   10561 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-084000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-084000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:48:58.378087   10561 out.go:177] 
	W0419 12:48:58.382337   10561 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:48:58.382356   10561 out.go:239] * 
	* 
	W0419 12:48:58.383425   10561 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:48:58.392281   10561 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-084000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000: exit status 7 (40.63025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.75s)
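Note: the create path fails fast here: LocalClient.Create returns in roughly 200-250 ms once socket_vmnet_client cannot connect, after which minikube deletes the half-created profile and retries once after five seconds before exiting with status 80 (GUEST_PROVISION). A sketch of that retry-once shape; startHost is a hypothetical stand-in for the real host-creation step, not minikube's actual API:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func startHost() error {
	// In the logs this step fails with:
	// Failed to connect to "/var/run/socket_vmnet": Connection refused
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // the exit status the test harness reports
		}
	}
}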

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-084000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-084000 create -f testdata/busybox.yaml: exit status 1 (27.311792ms)

** stderr ** 
	error: context "old-k8s-version-084000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-084000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000: exit status 7 (32.299291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-084000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000: exit status 7 (31.277083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
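Note: the remaining serial subtests fail by inheritance rather than on their own merits: FirstStart never provisioned a VM, so no kubeconfig context named old-k8s-version-084000 was ever written, and every kubectl --context invocation exits 1 with "context ... does not exist". A sketch of an up-front context check, shelling out to kubectl (the helper name is illustrative):

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

// hasContext reports whether the named context exists in the active kubeconfig.
func hasContext(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if sc.Text() == name {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasContext("old-k8s-version-084000")
	fmt.Println(ok, err) // false, <nil> on this runner: FirstStart never wrote the context
}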

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-084000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-084000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-084000 describe deploy/metrics-server -n kube-system: exit status 1 (27.076708ms)

** stderr ** 
	error: context "old-k8s-version-084000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-084000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000: exit status 7 (32.2335ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
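Note: the addons enable command itself appears to succeed because it only rewrites the profile's addon configuration (see the CustomAddonImages and CustomAddonRegistries maps in the SecondStart validation log below); only the kubectl describe step needs a live cluster. A sketch of parsing the Key=Value arguments passed via --images and --registries (illustrative, not minikube's exact implementation):

package main

import (
	"fmt"
	"strings"
)

// parsePair splits a single "Key=Value" flag argument such as
// "MetricsServer=fake.domain".
func parsePair(arg string) (key, val string, err error) {
	key, val, ok := strings.Cut(arg, "=")
	if !ok {
		return "", "", fmt.Errorf("expected Key=Value, got %q", arg)
	}
	return key, val, nil
}

func main() {
	k, v, _ := parsePair("MetricsServer=fake.domain")
	// The test later expects the registry override to produce
	// "fake.domain/registry.k8s.io/echoserver:1.4".
	fmt.Println(k, v)
}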

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-084000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-084000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.188012833s)

-- stdout --
	* [old-k8s-version-084000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-084000" primary control-plane node in "old-k8s-version-084000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-084000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-084000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:49:01.837436   10613 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:49:01.837581   10613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:01.837585   10613 out.go:304] Setting ErrFile to fd 2...
	I0419 12:49:01.837587   10613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:01.837720   10613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:49:01.838725   10613 out.go:298] Setting JSON to false
	I0419 12:49:01.855180   10613 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6512,"bootTime":1713549629,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:49:01.855246   10613 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:49:01.860676   10613 out.go:177] * [old-k8s-version-084000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:49:01.867617   10613 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:49:01.871705   10613 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:49:01.867743   10613 notify.go:220] Checking for updates...
	I0419 12:49:01.874677   10613 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:49:01.877663   10613 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:49:01.880693   10613 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:49:01.883690   10613 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:49:01.885536   10613 config.go:182] Loaded profile config "old-k8s-version-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0419 12:49:01.888679   10613 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0419 12:49:01.891729   10613 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:49:01.896507   10613 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:49:01.903670   10613 start.go:297] selected driver: qemu2
	I0419 12:49:01.903677   10613 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:49:01.903740   10613 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:49:01.905978   10613 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:49:01.906017   10613 cni.go:84] Creating CNI manager for ""
	I0419 12:49:01.906024   10613 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0419 12:49:01.906045   10613 start.go:340] cluster config:
	{Name:old-k8s-version-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:49:01.910083   10613 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:01.918684   10613 out.go:177] * Starting "old-k8s-version-084000" primary control-plane node in "old-k8s-version-084000" cluster
	I0419 12:49:01.922712   10613 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0419 12:49:01.922725   10613 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0419 12:49:01.922732   10613 cache.go:56] Caching tarball of preloaded images
	I0419 12:49:01.922787   10613 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:49:01.922792   10613 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0419 12:49:01.922851   10613 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/old-k8s-version-084000/config.json ...
	I0419 12:49:01.923412   10613 start.go:360] acquireMachinesLock for old-k8s-version-084000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:01.923444   10613 start.go:364] duration metric: took 25.041µs to acquireMachinesLock for "old-k8s-version-084000"
	I0419 12:49:01.923456   10613 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:49:01.923459   10613 fix.go:54] fixHost starting: 
	I0419 12:49:01.923574   10613 fix.go:112] recreateIfNeeded on old-k8s-version-084000: state=Stopped err=<nil>
	W0419 12:49:01.923582   10613 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:49:01.926764   10613 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-084000" ...
	I0419 12:49:01.934703   10613 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:43:50:a2:8b:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/disk.qcow2
	I0419 12:49:01.936718   10613 main.go:141] libmachine: STDOUT: 
	I0419 12:49:01.936738   10613 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:01.936767   10613 fix.go:56] duration metric: took 13.3065ms for fixHost
	I0419 12:49:01.936771   10613 start.go:83] releasing machines lock for "old-k8s-version-084000", held for 13.323ms
	W0419 12:49:01.936777   10613 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:49:01.936814   10613 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:01.936819   10613 start.go:728] Will try again in 5 seconds ...
	I0419 12:49:06.938882   10613 start.go:360] acquireMachinesLock for old-k8s-version-084000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:06.939418   10613 start.go:364] duration metric: took 400.708µs to acquireMachinesLock for "old-k8s-version-084000"
	I0419 12:49:06.939935   10613 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:49:06.939951   10613 fix.go:54] fixHost starting: 
	I0419 12:49:06.940509   10613 fix.go:112] recreateIfNeeded on old-k8s-version-084000: state=Stopped err=<nil>
	W0419 12:49:06.940527   10613 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:49:06.948690   10613 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-084000" ...
	I0419 12:49:06.952939   10613 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:43:50:a2:8b:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/old-k8s-version-084000/disk.qcow2
	I0419 12:49:06.960330   10613 main.go:141] libmachine: STDOUT: 
	I0419 12:49:06.960381   10613 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:06.960447   10613 fix.go:56] duration metric: took 20.495416ms for fixHost
	I0419 12:49:06.960463   10613 start.go:83] releasing machines lock for "old-k8s-version-084000", held for 20.999625ms
	W0419 12:49:06.960643   10613 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-084000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-084000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:06.968731   10613 out.go:177] 
	W0419 12:49:06.972884   10613 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:49:06.972913   10613 out.go:239] * 
	* 
	W0419 12:49:06.974531   10613 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:49:06.983856   10613 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-084000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000: exit status 7 (55.926416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
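Note: SecondStart takes the restart path ("Skipping create...Using existing machine configuration", fixHost) rather than createHost, but it launches QEMU through the same socket_vmnet_client wrapper, so it fails identically; only the error prefix changes ("driver start:" instead of "creating host: create: creating:"). If the daemon were merely slow to come up, a bounded poll like the sketch below could mask the race; here it is down outright, so the wait would simply time out (assumes the socket path from these logs):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket polls until the unix socket accepts a connection or the
// deadline passes.
func waitForSocket(path string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("gave up waiting for %s: %w", path, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForSocket("/var/run/socket_vmnet", 10*time.Second))
}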

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-084000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000: exit status 7 (33.011667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-084000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-084000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-084000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.780583ms)
** stderr ** 
	error: context "old-k8s-version-084000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-084000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000: exit status 7 (32.162167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
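
Both post-stop checks above fail before ever reaching a cluster: since the first start never succeeded, no "old-k8s-version-084000" entry was written to the kubeconfig, so every `kubectl --context old-k8s-version-084000 ...` call exits immediately with "context does not exist". A quick way to confirm the missing context with stock kubectl (illustrative, not part of the harness):

	kubectl config get-contexts -o name | grep old-k8s-version-084000 || echo "context absent"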
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-084000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000: exit status 7 (31.970416ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
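
The `-want +got` block above is a go-cmp diff: each `-` line is an image the test expected `image list` to report, and the absence of any `+` lines means the command returned no images at all, which points to a host that never booted rather than to wrong image versions. The same listing can be rendered in the other formats this suite exercises, e.g. (illustrative):

	out/minikube-darwin-arm64 -p old-k8s-version-084000 image list --format=table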
TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-084000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-084000 --alsologtostderr -v=1: exit status 83 (44.513542ms)
-- stdout --
	* The control-plane node old-k8s-version-084000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-084000"
-- /stdout --
** stderr ** 
	I0419 12:49:07.249896   10637 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:49:07.250908   10637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:07.250912   10637 out.go:304] Setting ErrFile to fd 2...
	I0419 12:49:07.250915   10637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:07.251072   10637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:49:07.251286   10637 out.go:298] Setting JSON to false
	I0419 12:49:07.251295   10637 mustload.go:65] Loading cluster: old-k8s-version-084000
	I0419 12:49:07.251491   10637 config.go:182] Loaded profile config "old-k8s-version-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0419 12:49:07.255240   10637 out.go:177] * The control-plane node old-k8s-version-084000 host is not running: state=Stopped
	I0419 12:49:07.259159   10637 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-084000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-084000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000: exit status 7 (31.208042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-084000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000: exit status 7 (30.874958ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
TestStartStop/group/no-preload/serial/FirstStart (10.02s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-289000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-289000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.948351875s)
-- stdout --
	* [no-preload-289000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-289000" primary control-plane node in "no-preload-289000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-289000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0419 12:49:07.715409   10660 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:49:07.715537   10660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:07.715540   10660 out.go:304] Setting ErrFile to fd 2...
	I0419 12:49:07.715543   10660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:07.715670   10660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:49:07.716761   10660 out.go:298] Setting JSON to false
	I0419 12:49:07.733152   10660 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6518,"bootTime":1713549629,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:49:07.733220   10660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:49:07.738067   10660 out.go:177] * [no-preload-289000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:49:07.745043   10660 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:49:07.749031   10660 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:49:07.745097   10660 notify.go:220] Checking for updates...
	I0419 12:49:07.755033   10660 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:49:07.758014   10660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:49:07.761096   10660 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:49:07.763983   10660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:49:07.767397   10660 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:49:07.767458   10660 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:49:07.767508   10660 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:49:07.772025   10660 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:49:07.778995   10660 start.go:297] selected driver: qemu2
	I0419 12:49:07.779002   10660 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:49:07.779008   10660 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:49:07.781302   10660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:49:07.785096   10660 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:49:07.788181   10660 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:49:07.788224   10660 cni.go:84] Creating CNI manager for ""
	I0419 12:49:07.788231   10660 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:49:07.788235   10660 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 12:49:07.788261   10660 start.go:340] cluster config:
	{Name:no-preload-289000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:49:07.792651   10660 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:07.801094   10660 out.go:177] * Starting "no-preload-289000" primary control-plane node in "no-preload-289000" cluster
	I0419 12:49:07.804997   10660 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:49:07.805052   10660 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/no-preload-289000/config.json ...
	I0419 12:49:07.805066   10660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/no-preload-289000/config.json: {Name:mk2f9a828c5fc17db12e1ff51f615733e37e49a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:49:07.805069   10660 cache.go:107] acquiring lock: {Name:mke0d297b5bc4c0575347e0b88640504e7dc748f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:07.805080   10660 cache.go:107] acquiring lock: {Name:mkc00ed9b00b809cda422a0ee201d9541861ad63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:07.805082   10660 cache.go:107] acquiring lock: {Name:mk7cb3366c1d2650e7973b23e5e1e4d782802e75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:07.805130   10660 cache.go:115] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0419 12:49:07.805135   10660 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 69.333µs
	I0419 12:49:07.805141   10660 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0419 12:49:07.805147   10660 cache.go:107] acquiring lock: {Name:mka65144489002e8b83bc08071d7c2562e7809dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:07.805247   10660 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0419 12:49:07.805240   10660 cache.go:107] acquiring lock: {Name:mkdaf56968ca07a964b4e2846a4cf15acb16d225 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:07.805278   10660 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0419 12:49:07.805289   10660 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0419 12:49:07.805307   10660 cache.go:107] acquiring lock: {Name:mk835459b84546bfd8eafd0194c143529dedd85f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:07.805338   10660 start.go:360] acquireMachinesLock for no-preload-289000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:07.805334   10660 cache.go:107] acquiring lock: {Name:mkb48f07a981d72c89fdbbbf3110075104ed90b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:07.805373   10660 cache.go:107] acquiring lock: {Name:mk7df37e4ac45a7997671a4a5fe6003e90f466a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:07.805402   10660 start.go:364] duration metric: took 54.458µs to acquireMachinesLock for "no-preload-289000"
	I0419 12:49:07.805466   10660 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 12:49:07.805437   10660 start.go:93] Provisioning new machine with config: &{Name:no-preload-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:49:07.805483   10660 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:49:07.814030   10660 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:49:07.805580   10660 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0419 12:49:07.805590   10660 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0419 12:49:07.805612   10660 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0419 12:49:07.819444   10660 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0419 12:49:07.819551   10660 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0419 12:49:07.820230   10660 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0419 12:49:07.820300   10660 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0419 12:49:07.822624   10660 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0419 12:49:07.822673   10660 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0419 12:49:07.822765   10660 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0419 12:49:07.830921   10660 start.go:159] libmachine.API.Create for "no-preload-289000" (driver="qemu2")
	I0419 12:49:07.830954   10660 client.go:168] LocalClient.Create starting
	I0419 12:49:07.831022   10660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:49:07.831051   10660 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:07.831060   10660 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:07.831100   10660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:49:07.831127   10660 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:07.831136   10660 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:07.831468   10660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:49:07.971367   10660 main.go:141] libmachine: Creating SSH key...
	I0419 12:49:08.157667   10660 main.go:141] libmachine: Creating Disk image...
	I0419 12:49:08.157688   10660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:49:08.157891   10660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/disk.qcow2
	I0419 12:49:08.170954   10660 main.go:141] libmachine: STDOUT: 
	I0419 12:49:08.170972   10660 main.go:141] libmachine: STDERR: 
	I0419 12:49:08.171014   10660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/disk.qcow2 +20000M
	I0419 12:49:08.182933   10660 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:49:08.182955   10660 main.go:141] libmachine: STDERR: 
	I0419 12:49:08.182969   10660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/disk.qcow2
	I0419 12:49:08.182972   10660 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:49:08.183005   10660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:47:3c:4d:01:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/disk.qcow2
	I0419 12:49:08.185008   10660 main.go:141] libmachine: STDOUT: 
	I0419 12:49:08.185024   10660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:08.185041   10660 client.go:171] duration metric: took 354.0885ms to LocalClient.Create
	I0419 12:49:08.227989   10660 cache.go:162] opening:  /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0419 12:49:08.253016   10660 cache.go:162] opening:  /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0419 12:49:08.269058   10660 cache.go:162] opening:  /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0
	I0419 12:49:08.273067   10660 cache.go:162] opening:  /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0
	I0419 12:49:08.274946   10660 cache.go:162] opening:  /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0
	I0419 12:49:08.287307   10660 cache.go:162] opening:  /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0419 12:49:08.312451   10660 cache.go:162] opening:  /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0419 12:49:08.431270   10660 cache.go:157] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0419 12:49:08.431289   10660 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 626.050916ms
	I0419 12:49:08.431296   10660 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0419 12:49:10.185174   10660 start.go:128] duration metric: took 2.379731s to createHost
	I0419 12:49:10.185196   10660 start.go:83] releasing machines lock for "no-preload-289000", held for 2.379837458s
	W0419 12:49:10.185232   10660 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:10.194369   10660 out.go:177] * Deleting "no-preload-289000" in qemu2 ...
	W0419 12:49:10.213556   10660 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:10.213602   10660 start.go:728] Will try again in 5 seconds ...
	I0419 12:49:10.792404   10660 cache.go:157] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0419 12:49:10.792467   10660 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 2.987380291s
	I0419 12:49:10.792504   10660 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0419 12:49:11.139987   10660 cache.go:157] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 exists
	I0419 12:49:11.140011   10660 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0" took 3.334769167s
	I0419 12:49:11.140028   10660 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 succeeded
	I0419 12:49:11.519156   10660 cache.go:157] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 exists
	I0419 12:49:11.519207   10660 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0" took 3.714207375s
	I0419 12:49:11.519223   10660 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 succeeded
	I0419 12:49:12.512100   10660 cache.go:157] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 exists
	I0419 12:49:12.512131   10660 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0" took 4.706988416s
	I0419 12:49:12.512149   10660 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 succeeded
	I0419 12:49:12.589921   10660 cache.go:157] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 exists
	I0419 12:49:12.589933   10660 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0" took 4.784968542s
	I0419 12:49:12.589941   10660 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 succeeded
	I0419 12:49:15.214490   10660 start.go:360] acquireMachinesLock for no-preload-289000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:15.214754   10660 start.go:364] duration metric: took 222.917µs to acquireMachinesLock for "no-preload-289000"
	I0419 12:49:15.214826   10660 start.go:93] Provisioning new machine with config: &{Name:no-preload-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:49:15.214947   10660 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:49:15.224319   10660 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:49:15.256295   10660 start.go:159] libmachine.API.Create for "no-preload-289000" (driver="qemu2")
	I0419 12:49:15.256333   10660 client.go:168] LocalClient.Create starting
	I0419 12:49:15.256436   10660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:49:15.256496   10660 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:15.256515   10660 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:15.256576   10660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:49:15.256610   10660 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:15.256621   10660 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:15.257043   10660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:49:15.401185   10660 main.go:141] libmachine: Creating SSH key...
	I0419 12:49:15.565952   10660 main.go:141] libmachine: Creating Disk image...
	I0419 12:49:15.565960   10660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:49:15.566167   10660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/disk.qcow2
	I0419 12:49:15.579387   10660 main.go:141] libmachine: STDOUT: 
	I0419 12:49:15.579408   10660 main.go:141] libmachine: STDERR: 
	I0419 12:49:15.579482   10660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/disk.qcow2 +20000M
	I0419 12:49:15.590790   10660 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:49:15.590808   10660 main.go:141] libmachine: STDERR: 
	I0419 12:49:15.590828   10660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/disk.qcow2
	I0419 12:49:15.590834   10660 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:49:15.590882   10660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b8:64:52:14:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/disk.qcow2
	I0419 12:49:15.592724   10660 main.go:141] libmachine: STDOUT: 
	I0419 12:49:15.592738   10660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:15.592752   10660 client.go:171] duration metric: took 336.423084ms to LocalClient.Create
	I0419 12:49:15.775700   10660 cache.go:157] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0419 12:49:15.775719   10660 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 7.97068375s
	I0419 12:49:15.775725   10660 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0419 12:49:15.775738   10660 cache.go:87] Successfully saved all images to host disk.
	I0419 12:49:17.594906   10660 start.go:128] duration metric: took 2.379984s to createHost
	I0419 12:49:17.594963   10660 start.go:83] releasing machines lock for "no-preload-289000", held for 2.380246042s
	W0419 12:49:17.595309   10660 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:17.603786   10660 out.go:177] 
	W0419 12:49:17.606943   10660 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:49:17.606963   10660 out.go:239] * 
	* 
	W0419 12:49:17.608988   10660 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:49:17.617900   10660 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-289000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000: exit status 7 (64.192458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-289000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.02s)
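
The FirstStart log above shows minikube's full create/retry loop: create the VM, hit "Connection refused" on /var/run/socket_vmnet, delete the profile, wait 5 seconds, retry once, then exit 80 with GUEST_PROVISION. Since socket_vmnet_client connects to the unix socket before handing the connection to its child process, the failure should be reproducible without QEMU by substituting a no-op command; a sketch using the same wrapper and socket path that appear in the log:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# expected while the daemon is down:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused

Note that the image-caching goroutines were unaffected and completed ("Successfully saved all images to host disk."), so the caches are warm for the SecondStart attempt below.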
TestStartStop/group/no-preload/serial/DeployApp (0.09s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-289000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-289000 create -f testdata/busybox.yaml: exit status 1 (28.330875ms)
** stderr ** 
	error: context "no-preload-289000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-289000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000: exit status 7 (32.115959ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-289000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000: exit status 7 (31.783084ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-289000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-289000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-289000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-289000 describe deploy/metrics-server -n kube-system: exit status 1 (26.81225ms)
** stderr ** 
	error: context "no-preload-289000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-289000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000: exit status 7 (31.635292ms)
-- stdout --
	Stopped

helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-289000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
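
Note that the `addons enable metrics-server` step itself passed despite the stopped host; enabling an addon only records it in the profile config (visible as Addons:map[dashboard:true metrics-server:true] plus the CustomAddonImages/CustomAddonRegistries overrides in the SecondStart config dump below), and the failure comes from the follow-up kubectl check. The recorded addon state can presumably be inspected without a running cluster (illustrative, not from this run):

	out/minikube-darwin-arm64 -p no-preload-289000 addons list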
TestStartStop/group/no-preload/serial/SecondStart (5.21s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-289000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-289000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.170697125s)
-- stdout --
	* [no-preload-289000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-289000" primary control-plane node in "no-preload-289000" cluster
	* Restarting existing qemu2 VM for "no-preload-289000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-289000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0419 12:49:20.060256   10734 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:49:20.060384   10734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:20.060387   10734 out.go:304] Setting ErrFile to fd 2...
	I0419 12:49:20.060390   10734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:20.060524   10734 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:49:20.061536   10734 out.go:298] Setting JSON to false
	I0419 12:49:20.077716   10734 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6531,"bootTime":1713549629,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:49:20.077780   10734 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:49:20.082203   10734 out.go:177] * [no-preload-289000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:49:20.088194   10734 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:49:20.088257   10734 notify.go:220] Checking for updates...
	I0419 12:49:20.091178   10734 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:49:20.095151   10734 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:49:20.098254   10734 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:49:20.101055   10734 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:49:20.104151   10734 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:49:20.107469   10734 config.go:182] Loaded profile config "no-preload-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:49:20.107734   10734 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:49:20.111032   10734 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:49:20.119188   10734 start.go:297] selected driver: qemu2
	I0419 12:49:20.119197   10734 start.go:901] validating driver "qemu2" against &{Name:no-preload-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:49:20.119286   10734 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:49:20.121576   10734 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:49:20.121620   10734 cni.go:84] Creating CNI manager for ""
	I0419 12:49:20.121627   10734 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:49:20.121647   10734 start.go:340] cluster config:
	{Name:no-preload-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-289000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:49:20.126086   10734 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:20.134087   10734 out.go:177] * Starting "no-preload-289000" primary control-plane node in "no-preload-289000" cluster
	I0419 12:49:20.137929   10734 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:49:20.137982   10734 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/no-preload-289000/config.json ...
	I0419 12:49:20.138034   10734 cache.go:107] acquiring lock: {Name:mke0d297b5bc4c0575347e0b88640504e7dc748f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:20.138083   10734 cache.go:107] acquiring lock: {Name:mkc00ed9b00b809cda422a0ee201d9541861ad63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:20.138095   10734 cache.go:115] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0419 12:49:20.138102   10734 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.709µs
	I0419 12:49:20.138117   10734 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0419 12:49:20.138102   10734 cache.go:107] acquiring lock: {Name:mk7df37e4ac45a7997671a4a5fe6003e90f466a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:20.138126   10734 cache.go:107] acquiring lock: {Name:mk7cb3366c1d2650e7973b23e5e1e4d782802e75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:20.138139   10734 cache.go:115] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 exists
	I0419 12:49:20.138143   10734 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0" took 75.792µs
	I0419 12:49:20.138147   10734 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 succeeded
	I0419 12:49:20.138161   10734 cache.go:115] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 exists
	I0419 12:49:20.138165   10734 cache.go:107] acquiring lock: {Name:mka65144489002e8b83bc08071d7c2562e7809dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:20.138173   10734 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0" took 47.042µs
	I0419 12:49:20.138174   10734 cache.go:115] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 exists
	I0419 12:49:20.138178   10734 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 succeeded
	I0419 12:49:20.138180   10734 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0" took 101.208µs
	I0419 12:49:20.138187   10734 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 succeeded
	I0419 12:49:20.138184   10734 cache.go:107] acquiring lock: {Name:mkb48f07a981d72c89fdbbbf3110075104ed90b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:20.138203   10734 cache.go:115] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0419 12:49:20.138199   10734 cache.go:107] acquiring lock: {Name:mkdaf56968ca07a964b4e2846a4cf15acb16d225 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:20.138207   10734 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 42.416µs
	I0419 12:49:20.138215   10734 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0419 12:49:20.138205   10734 cache.go:107] acquiring lock: {Name:mk835459b84546bfd8eafd0194c143529dedd85f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:20.138246   10734 cache.go:115] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0419 12:49:20.138250   10734 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 68.958µs
	I0419 12:49:20.138253   10734 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0419 12:49:20.138280   10734 cache.go:115] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 exists
	I0419 12:49:20.138290   10734 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0" took 132.125µs
	I0419 12:49:20.138293   10734 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 succeeded
	I0419 12:49:20.138285   10734 cache.go:115] /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0419 12:49:20.138319   10734 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 132.667µs
	I0419 12:49:20.138324   10734 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0419 12:49:20.138330   10734 cache.go:87] Successfully saved all images to host disk.
	I0419 12:49:20.138430   10734 start.go:360] acquireMachinesLock for no-preload-289000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:20.138460   10734 start.go:364] duration metric: took 24.542µs to acquireMachinesLock for "no-preload-289000"
	I0419 12:49:20.138469   10734 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:49:20.138476   10734 fix.go:54] fixHost starting: 
	I0419 12:49:20.138579   10734 fix.go:112] recreateIfNeeded on no-preload-289000: state=Stopped err=<nil>
	W0419 12:49:20.138587   10734 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:49:20.145978   10734 out.go:177] * Restarting existing qemu2 VM for "no-preload-289000" ...
	I0419 12:49:20.150165   10734 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b8:64:52:14:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/disk.qcow2
	I0419 12:49:20.152197   10734 main.go:141] libmachine: STDOUT: 
	I0419 12:49:20.152218   10734 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:20.152246   10734 fix.go:56] duration metric: took 13.771042ms for fixHost
	I0419 12:49:20.152251   10734 start.go:83] releasing machines lock for "no-preload-289000", held for 13.787041ms
	W0419 12:49:20.152256   10734 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:49:20.152283   10734 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:20.152287   10734 start.go:728] Will try again in 5 seconds ...
	I0419 12:49:25.154303   10734 start.go:360] acquireMachinesLock for no-preload-289000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:25.154476   10734 start.go:364] duration metric: took 132.916µs to acquireMachinesLock for "no-preload-289000"
	I0419 12:49:25.154504   10734 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:49:25.154510   10734 fix.go:54] fixHost starting: 
	I0419 12:49:25.154756   10734 fix.go:112] recreateIfNeeded on no-preload-289000: state=Stopped err=<nil>
	W0419 12:49:25.154766   10734 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:49:25.159127   10734 out.go:177] * Restarting existing qemu2 VM for "no-preload-289000" ...
	I0419 12:49:25.166073   10734 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b8:64:52:14:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/no-preload-289000/disk.qcow2
	I0419 12:49:25.169540   10734 main.go:141] libmachine: STDOUT: 
	I0419 12:49:25.169577   10734 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:25.169605   10734 fix.go:56] duration metric: took 15.09625ms for fixHost
	I0419 12:49:25.169612   10734 start.go:83] releasing machines lock for "no-preload-289000", held for 15.124833ms
	W0419 12:49:25.169698   10734 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-289000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-289000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:25.177914   10734 out.go:177] 
	W0419 12:49:25.181062   10734 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:49:25.181076   10734 out.go:239] * 
	* 
	W0419 12:49:25.181726   10734 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:49:25.192030   10734 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-289000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
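Note: this failure, and every start failure that follows in this group, bottoms out in the same host-side error: Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver delegates networking to the socket_vmnet daemon, and nothing was listening on its socket during this run. A minimal spot-check outside minikube, reusing the client invocation pattern shown in the log above (the paths are the ones this job logged, not universal defaults):

	$ ls -l /var/run/socket_vmnet                                            # does the socket exist?
	$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true   # can a client attach?

If the second command prints the same "Connection refused" error, the daemon is down, and the exit status 80 here reflects broken test infrastructure rather than a regression in the code under test.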
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000: exit status 7 (40.762125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-289000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.21s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-289000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000: exit status 7 (32.123875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-289000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-289000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-289000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-289000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.166459ms)

** stderr ** 
	error: context "no-preload-289000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-289000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000: exit status 7 (31.86875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-289000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-289000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000: exit status 7 (31.910542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-289000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-289000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-289000 --alsologtostderr -v=1: exit status 83 (43.065917ms)

-- stdout --
	* The control-plane node no-preload-289000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-289000"

-- /stdout --
** stderr ** 
	I0419 12:49:25.439000   10753 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:49:25.439179   10753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:25.439187   10753 out.go:304] Setting ErrFile to fd 2...
	I0419 12:49:25.439189   10753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:25.439322   10753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:49:25.439567   10753 out.go:298] Setting JSON to false
	I0419 12:49:25.439576   10753 mustload.go:65] Loading cluster: no-preload-289000
	I0419 12:49:25.439781   10753 config.go:182] Loaded profile config "no-preload-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:49:25.443592   10753 out.go:177] * The control-plane node no-preload-289000 host is not running: state=Stopped
	I0419 12:49:25.447573   10753 out.go:177]   To start a cluster, run: "minikube start -p no-preload-289000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-289000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000: exit status 7 (31.746125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-289000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000: exit status 7 (31.759458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-289000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

TestStartStop/group/embed-certs/serial/FirstStart (11.44s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-918000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-918000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (11.375104375s)

-- stdout --
	* [embed-certs-918000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-918000" primary control-plane node in "embed-certs-918000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-918000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:49:25.918143   10776 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:49:25.918295   10776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:25.918299   10776 out.go:304] Setting ErrFile to fd 2...
	I0419 12:49:25.918301   10776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:25.918430   10776 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:49:25.919456   10776 out.go:298] Setting JSON to false
	I0419 12:49:25.935888   10776 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6536,"bootTime":1713549629,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:49:25.935956   10776 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:49:25.940777   10776 out.go:177] * [embed-certs-918000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:49:25.946792   10776 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:49:25.950761   10776 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:49:25.946879   10776 notify.go:220] Checking for updates...
	I0419 12:49:25.955254   10776 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:49:25.958835   10776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:49:25.961753   10776 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:49:25.964776   10776 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:49:25.968016   10776 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:49:25.968073   10776 config.go:182] Loaded profile config "stopped-upgrade-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0419 12:49:25.968120   10776 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:49:25.972785   10776 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:49:25.979663   10776 start.go:297] selected driver: qemu2
	I0419 12:49:25.979670   10776 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:49:25.979675   10776 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:49:25.981957   10776 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:49:25.984799   10776 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:49:25.987805   10776 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:49:25.987837   10776 cni.go:84] Creating CNI manager for ""
	I0419 12:49:25.987845   10776 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:49:25.987850   10776 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 12:49:25.987874   10776 start.go:340] cluster config:
	{Name:embed-certs-918000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socke
t_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:49:25.992329   10776 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:26.000771   10776 out.go:177] * Starting "embed-certs-918000" primary control-plane node in "embed-certs-918000" cluster
	I0419 12:49:26.003622   10776 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:49:26.003633   10776 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:49:26.003638   10776 cache.go:56] Caching tarball of preloaded images
	I0419 12:49:26.003685   10776 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:49:26.003690   10776 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:49:26.003739   10776 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/embed-certs-918000/config.json ...
	I0419 12:49:26.003750   10776 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/embed-certs-918000/config.json: {Name:mka9e237e21cc3dcd3661ce7654f5e648979b69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:49:26.003961   10776 start.go:360] acquireMachinesLock for embed-certs-918000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:26.003990   10776 start.go:364] duration metric: took 23.5µs to acquireMachinesLock for "embed-certs-918000"
	I0419 12:49:26.004002   10776 start.go:93] Provisioning new machine with config: &{Name:embed-certs-918000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:embed-certs-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:49:26.004028   10776 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:49:26.010729   10776 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:49:26.026529   10776 start.go:159] libmachine.API.Create for "embed-certs-918000" (driver="qemu2")
	I0419 12:49:26.026559   10776 client.go:168] LocalClient.Create starting
	I0419 12:49:26.026623   10776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:49:26.026653   10776 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:26.026660   10776 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:26.026711   10776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:49:26.026734   10776 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:26.026741   10776 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:26.027138   10776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:49:26.165205   10776 main.go:141] libmachine: Creating SSH key...
	I0419 12:49:26.294654   10776 main.go:141] libmachine: Creating Disk image...
	I0419 12:49:26.294663   10776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:49:26.294868   10776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/disk.qcow2
	I0419 12:49:26.307767   10776 main.go:141] libmachine: STDOUT: 
	I0419 12:49:26.307798   10776 main.go:141] libmachine: STDERR: 
	I0419 12:49:26.307854   10776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/disk.qcow2 +20000M
	I0419 12:49:26.318880   10776 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:49:26.318908   10776 main.go:141] libmachine: STDERR: 
	I0419 12:49:26.318924   10776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/disk.qcow2
	I0419 12:49:26.318928   10776 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:49:26.318960   10776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:96:7d:66:e0:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/disk.qcow2
	I0419 12:49:26.320748   10776 main.go:141] libmachine: STDOUT: 
	I0419 12:49:26.320765   10776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:26.320784   10776 client.go:171] duration metric: took 294.225125ms to LocalClient.Create
	I0419 12:49:28.322997   10776 start.go:128] duration metric: took 2.318982375s to createHost
	I0419 12:49:28.323095   10776 start.go:83] releasing machines lock for "embed-certs-918000", held for 2.319147625s
	W0419 12:49:28.323150   10776 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:28.334572   10776 out.go:177] * Deleting "embed-certs-918000" in qemu2 ...
	W0419 12:49:28.361365   10776 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:28.361394   10776 start.go:728] Will try again in 5 seconds ...
	I0419 12:49:33.363448   10776 start.go:360] acquireMachinesLock for embed-certs-918000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:34.819276   10776 start.go:364] duration metric: took 1.455774333s to acquireMachinesLock for "embed-certs-918000"
	I0419 12:49:34.819394   10776 start.go:93] Provisioning new machine with config: &{Name:embed-certs-918000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:embed-certs-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:49:34.819672   10776 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:49:34.830229   10776 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:49:34.881874   10776 start.go:159] libmachine.API.Create for "embed-certs-918000" (driver="qemu2")
	I0419 12:49:34.881948   10776 client.go:168] LocalClient.Create starting
	I0419 12:49:34.882142   10776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:49:34.882208   10776 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:34.882222   10776 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:34.882281   10776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:49:34.882332   10776 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:34.882344   10776 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:34.883020   10776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:49:35.030691   10776 main.go:141] libmachine: Creating SSH key...
	I0419 12:49:35.187429   10776 main.go:141] libmachine: Creating Disk image...
	I0419 12:49:35.187438   10776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:49:35.187618   10776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/disk.qcow2
	I0419 12:49:35.200717   10776 main.go:141] libmachine: STDOUT: 
	I0419 12:49:35.200748   10776 main.go:141] libmachine: STDERR: 
	I0419 12:49:35.200803   10776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/disk.qcow2 +20000M
	I0419 12:49:35.211800   10776 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:49:35.211824   10776 main.go:141] libmachine: STDERR: 
	I0419 12:49:35.211836   10776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/disk.qcow2
	I0419 12:49:35.211841   10776 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:49:35.211877   10776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:49:98:60:ef:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/disk.qcow2
	I0419 12:49:35.213551   10776 main.go:141] libmachine: STDOUT: 
	I0419 12:49:35.213571   10776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:35.213587   10776 client.go:171] duration metric: took 331.62425ms to LocalClient.Create
	I0419 12:49:37.215196   10776 start.go:128] duration metric: took 2.395535708s to createHost
	I0419 12:49:37.215268   10776 start.go:83] releasing machines lock for "embed-certs-918000", held for 2.395986083s
	W0419 12:49:37.215592   10776 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-918000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-918000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:37.220213   10776 out.go:177] 
	W0419 12:49:37.235315   10776 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:49:37.235342   10776 out.go:239] * 
	* 
	W0419 12:49:37.238009   10776 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:49:37.247190   10776 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-918000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
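Note: unlike the no-preload run above, which restarted an existing VM, this is the create path: minikube writes the qcow2 disk with qemu-img, fails to attach the VM to the socket_vmnet socket, deletes the half-created "embed-certs-918000" machine, retries once after 5 seconds, and then exits with status 80. The fix is host-side, not test-side: the daemon serving /var/run/socket_vmnet has to be brought back up. A sketch of what that might look like, assuming a from-source install under /opt/socket_vmnet managed by launchd (the plist label below is the one suggested by the socket_vmnet README and may differ per install):

	$ pgrep -fl socket_vmnet                                              # is the daemon running at all?
	$ sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet   # restart it if it is loaded

Until that socket accepts connections, every qemu2/socket_vmnet start in the remainder of this report can be expected to fail the same way.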
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (66.538417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.44s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-125000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-125000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.715181166s)

-- stdout --
	* [default-k8s-diff-port-125000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-125000" primary control-plane node in "default-k8s-diff-port-125000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-125000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:49:32.476712   10802 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:49:32.476858   10802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:32.476861   10802 out.go:304] Setting ErrFile to fd 2...
	I0419 12:49:32.476864   10802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:32.476994   10802 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:49:32.478178   10802 out.go:298] Setting JSON to false
	I0419 12:49:32.494473   10802 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6543,"bootTime":1713549629,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:49:32.494540   10802 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:49:32.498932   10802 out.go:177] * [default-k8s-diff-port-125000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:49:32.506875   10802 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:49:32.509838   10802 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:49:32.506913   10802 notify.go:220] Checking for updates...
	I0419 12:49:32.516704   10802 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:49:32.519816   10802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:49:32.522901   10802 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:49:32.525831   10802 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:49:32.529232   10802 config.go:182] Loaded profile config "embed-certs-918000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:49:32.529292   10802 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:49:32.529338   10802 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:49:32.533803   10802 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:49:32.540834   10802 start.go:297] selected driver: qemu2
	I0419 12:49:32.540840   10802 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:49:32.540846   10802 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:49:32.543076   10802 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:49:32.547806   10802 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:49:32.550937   10802 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:49:32.550982   10802 cni.go:84] Creating CNI manager for ""
	I0419 12:49:32.550989   10802 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:49:32.550997   10802 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 12:49:32.551027   10802 start.go:340] cluster config:
	{Name:default-k8s-diff-port-125000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-125000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:49:32.555601   10802 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:32.568867   10802 out.go:177] * Starting "default-k8s-diff-port-125000" primary control-plane node in "default-k8s-diff-port-125000" cluster
	I0419 12:49:32.571852   10802 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:49:32.571865   10802 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:49:32.571869   10802 cache.go:56] Caching tarball of preloaded images
	I0419 12:49:32.571936   10802 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:49:32.571941   10802 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:49:32.571999   10802 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/default-k8s-diff-port-125000/config.json ...
	I0419 12:49:32.572010   10802 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/default-k8s-diff-port-125000/config.json: {Name:mk3d4dfd110659861d9b9fe26822daf8b05afccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:49:32.572243   10802 start.go:360] acquireMachinesLock for default-k8s-diff-port-125000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:32.572281   10802 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "default-k8s-diff-port-125000"
	I0419 12:49:32.572293   10802 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-125000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-125000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:49:32.572320   10802 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:49:32.579893   10802 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:49:32.597727   10802 start.go:159] libmachine.API.Create for "default-k8s-diff-port-125000" (driver="qemu2")
	I0419 12:49:32.597755   10802 client.go:168] LocalClient.Create starting
	I0419 12:49:32.597821   10802 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:49:32.597851   10802 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:32.597861   10802 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:32.597902   10802 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:49:32.597927   10802 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:32.597935   10802 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:32.598309   10802 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:49:32.726833   10802 main.go:141] libmachine: Creating SSH key...
	I0419 12:49:32.791657   10802 main.go:141] libmachine: Creating Disk image...
	I0419 12:49:32.791662   10802 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:49:32.791834   10802 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/disk.qcow2
	I0419 12:49:32.804123   10802 main.go:141] libmachine: STDOUT: 
	I0419 12:49:32.804150   10802 main.go:141] libmachine: STDERR: 
	I0419 12:49:32.804200   10802 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/disk.qcow2 +20000M
	I0419 12:49:32.815089   10802 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:49:32.815106   10802 main.go:141] libmachine: STDERR: 
	I0419 12:49:32.815123   10802 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/disk.qcow2
	I0419 12:49:32.815128   10802 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:49:32.815156   10802 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:f7:99:1c:53:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/disk.qcow2
	I0419 12:49:32.816789   10802 main.go:141] libmachine: STDOUT: 
	I0419 12:49:32.816809   10802 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:32.816828   10802 client.go:171] duration metric: took 219.072666ms to LocalClient.Create
	I0419 12:49:34.819013   10802 start.go:128] duration metric: took 2.246719125s to createHost
	I0419 12:49:34.819091   10802 start.go:83] releasing machines lock for "default-k8s-diff-port-125000", held for 2.246850666s
	W0419 12:49:34.819193   10802 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:34.840332   10802 out.go:177] * Deleting "default-k8s-diff-port-125000" in qemu2 ...
	W0419 12:49:34.859009   10802 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:34.859030   10802 start.go:728] Will try again in 5 seconds ...
	I0419 12:49:39.861021   10802 start.go:360] acquireMachinesLock for default-k8s-diff-port-125000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:39.861155   10802 start.go:364] duration metric: took 104.666µs to acquireMachinesLock for "default-k8s-diff-port-125000"
	I0419 12:49:39.861174   10802 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-125000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-125000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:49:39.861254   10802 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:49:39.869423   10802 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:49:39.898630   10802 start.go:159] libmachine.API.Create for "default-k8s-diff-port-125000" (driver="qemu2")
	I0419 12:49:39.898674   10802 client.go:168] LocalClient.Create starting
	I0419 12:49:39.898756   10802 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:49:39.898810   10802 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:39.898830   10802 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:39.898885   10802 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:49:39.898910   10802 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:39.898920   10802 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:39.899325   10802 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:49:40.027605   10802 main.go:141] libmachine: Creating SSH key...
	I0419 12:49:40.090987   10802 main.go:141] libmachine: Creating Disk image...
	I0419 12:49:40.090992   10802 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:49:40.091180   10802 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/disk.qcow2
	I0419 12:49:40.103557   10802 main.go:141] libmachine: STDOUT: 
	I0419 12:49:40.103578   10802 main.go:141] libmachine: STDERR: 
	I0419 12:49:40.103649   10802 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/disk.qcow2 +20000M
	I0419 12:49:40.114569   10802 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:49:40.114591   10802 main.go:141] libmachine: STDERR: 
	I0419 12:49:40.114603   10802 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/disk.qcow2
	I0419 12:49:40.114607   10802 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:49:40.114643   10802 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:cc:0b:db:df:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/disk.qcow2
	I0419 12:49:40.116456   10802 main.go:141] libmachine: STDOUT: 
	I0419 12:49:40.116471   10802 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:40.116481   10802 client.go:171] duration metric: took 217.806708ms to LocalClient.Create
	I0419 12:49:42.118613   10802 start.go:128] duration metric: took 2.257383875s to createHost
	I0419 12:49:42.118667   10802 start.go:83] releasing machines lock for "default-k8s-diff-port-125000", held for 2.25754925s
	W0419 12:49:42.119091   10802 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-125000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-125000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:42.130761   10802 out.go:177] 
	W0419 12:49:42.133729   10802 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:49:42.133836   10802 out.go:239] * 
	* 
	W0419 12:49:42.136449   10802 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:49:42.146733   10802 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-125000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000: exit status 7 (66.295375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-125000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.78s)
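
Every failed start in this run dies at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched and minikube falls through its single retry into GUEST_PROVISION. A minimal triage sketch for the build host follows; the daemon binary path and launchd label are assumptions inferred from the SocketVMnetClientPath/SocketVMnetPath values in the cluster config above and the upstream lima-vm/socket_vmnet layout, not from anything in this log:

	# Does the socket exist, and is a daemon holding it open?
	ls -l /var/run/socket_vmnet

	# Is a socket_vmnet daemon loaded? (service label assumed from the
	# upstream launchd plist; adjust to the local install)
	sudo launchctl list | grep -i socket_vmnet

	# If nothing is listening, run the daemon in the foreground to debug
	# (binary path assumed to sit next to the socket_vmnet_client seen above)
	sudo /opt/socket_vmnet/bin/socket_vmnet /var/run/socket_vmnet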

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-918000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-918000 create -f testdata/busybox.yaml: exit status 1 (29.229959ms)

** stderr ** 
	error: context "embed-certs-918000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-918000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (31.46725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (31.6945ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
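
The context "embed-certs-918000" does not exist errors in this and the following subtests are downstream of the failed starts above: minikube only writes a kubeconfig context once a node has been provisioned, so after a FirstStart/SecondStart failure every kubectl --context invocation in the group fails identically. A quick check that confirms the cascade (assuming kubectl is on PATH; the KUBECONFIG path is the one printed in the start logs):

	KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig \
	  kubectl config get-contexts -o name | grep embed-certs-918000 \
	  || echo "context never created: start failed before kubeconfig was written"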

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-918000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-918000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-918000 describe deploy/metrics-server -n kube-system: exit status 1 (27.2375ms)

** stderr ** 
	error: context "embed-certs-918000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-918000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (30.999875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-918000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-918000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.199179291s)

-- stdout --
	* [embed-certs-918000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-918000" primary control-plane node in "embed-certs-918000" cluster
	* Restarting existing qemu2 VM for "embed-certs-918000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-918000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:49:39.676389   10844 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:49:39.676553   10844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:39.676556   10844 out.go:304] Setting ErrFile to fd 2...
	I0419 12:49:39.676558   10844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:39.676703   10844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:49:39.677701   10844 out.go:298] Setting JSON to false
	I0419 12:49:39.693812   10844 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6550,"bootTime":1713549629,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:49:39.693883   10844 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:49:39.699030   10844 out.go:177] * [embed-certs-918000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:49:39.706008   10844 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:49:39.706043   10844 notify.go:220] Checking for updates...
	I0419 12:49:39.713014   10844 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:49:39.720956   10844 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:49:39.724018   10844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:49:39.726949   10844 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:49:39.731133   10844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:49:39.734385   10844 config.go:182] Loaded profile config "embed-certs-918000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:49:39.734665   10844 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:49:39.738988   10844 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:49:39.746020   10844 start.go:297] selected driver: qemu2
	I0419 12:49:39.746028   10844 start.go:901] validating driver "qemu2" against &{Name:embed-certs-918000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:embed-certs-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:49:39.746076   10844 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:49:39.748505   10844 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:49:39.748549   10844 cni.go:84] Creating CNI manager for ""
	I0419 12:49:39.748557   10844 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:49:39.748585   10844 start.go:340] cluster config:
	{Name:embed-certs-918000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-918000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:49:39.753010   10844 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:39.761016   10844 out.go:177] * Starting "embed-certs-918000" primary control-plane node in "embed-certs-918000" cluster
	I0419 12:49:39.764790   10844 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:49:39.764802   10844 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:49:39.764810   10844 cache.go:56] Caching tarball of preloaded images
	I0419 12:49:39.764864   10844 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:49:39.764870   10844 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:49:39.764929   10844 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/embed-certs-918000/config.json ...
	I0419 12:49:39.765451   10844 start.go:360] acquireMachinesLock for embed-certs-918000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:39.765485   10844 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "embed-certs-918000"
	I0419 12:49:39.765495   10844 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:49:39.765501   10844 fix.go:54] fixHost starting: 
	I0419 12:49:39.765615   10844 fix.go:112] recreateIfNeeded on embed-certs-918000: state=Stopped err=<nil>
	W0419 12:49:39.765624   10844 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:49:39.773991   10844 out.go:177] * Restarting existing qemu2 VM for "embed-certs-918000" ...
	I0419 12:49:39.777951   10844 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:49:98:60:ef:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/disk.qcow2
	I0419 12:49:39.780070   10844 main.go:141] libmachine: STDOUT: 
	I0419 12:49:39.780090   10844 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:39.780120   10844 fix.go:56] duration metric: took 14.619209ms for fixHost
	I0419 12:49:39.780126   10844 start.go:83] releasing machines lock for "embed-certs-918000", held for 14.636417ms
	W0419 12:49:39.780131   10844 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:49:39.780163   10844 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:39.780168   10844 start.go:728] Will try again in 5 seconds ...
	I0419 12:49:44.782237   10844 start.go:360] acquireMachinesLock for embed-certs-918000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:44.782474   10844 start.go:364] duration metric: took 166.125µs to acquireMachinesLock for "embed-certs-918000"
	I0419 12:49:44.782530   10844 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:49:44.782542   10844 fix.go:54] fixHost starting: 
	I0419 12:49:44.782996   10844 fix.go:112] recreateIfNeeded on embed-certs-918000: state=Stopped err=<nil>
	W0419 12:49:44.783011   10844 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:49:44.791496   10844 out.go:177] * Restarting existing qemu2 VM for "embed-certs-918000" ...
	I0419 12:49:44.795669   10844 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:49:98:60:ef:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/embed-certs-918000/disk.qcow2
	I0419 12:49:44.805094   10844 main.go:141] libmachine: STDOUT: 
	I0419 12:49:44.805173   10844 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:44.805294   10844 fix.go:56] duration metric: took 22.747458ms for fixHost
	I0419 12:49:44.805324   10844 start.go:83] releasing machines lock for "embed-certs-918000", held for 22.830084ms
	W0419 12:49:44.805544   10844 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-918000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-918000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:44.814555   10844 out.go:177] 
	W0419 12:49:44.818500   10844 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:49:44.818563   10844 out.go:239] * 
	* 
	W0419 12:49:44.821270   10844 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:49:44.830465   10844 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-918000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (68.185875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-125000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-125000 create -f testdata/busybox.yaml: exit status 1 (29.133ms)

** stderr ** 
	error: context "default-k8s-diff-port-125000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-125000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000: exit status 7 (31.313459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-125000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000: exit status 7 (30.713958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-125000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-125000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-125000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-125000 describe deploy/metrics-server -n kube-system: exit status 1 (26.774167ms)

** stderr ** 
	error: context "default-k8s-diff-port-125000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-125000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000: exit status 7 (31.238458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-125000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-918000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (33.55125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-918000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-918000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-918000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.257208ms)

** stderr ** 
	error: context "embed-certs-918000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-918000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (31.693208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-918000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (30.475583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-918000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-918000 --alsologtostderr -v=1: exit status 83 (43.238584ms)

-- stdout --
	* The control-plane node embed-certs-918000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-918000"

-- /stdout --
** stderr ** 
	I0419 12:49:45.112051   10901 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:49:45.112220   10901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:45.112223   10901 out.go:304] Setting ErrFile to fd 2...
	I0419 12:49:45.112225   10901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:45.112341   10901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:49:45.112591   10901 out.go:298] Setting JSON to false
	I0419 12:49:45.112599   10901 mustload.go:65] Loading cluster: embed-certs-918000
	I0419 12:49:45.112784   10901 config.go:182] Loaded profile config "embed-certs-918000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:49:45.116757   10901 out.go:177] * The control-plane node embed-certs-918000 host is not running: state=Stopped
	I0419 12:49:45.120573   10901 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-918000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-918000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (31.525667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (30.763458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-918000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (10.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-603000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-603000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.98944225s)

-- stdout --
	* [newest-cni-603000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-603000" primary control-plane node in "newest-cni-603000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-603000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:49:45.608866   10930 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:49:45.609011   10930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:45.609014   10930 out.go:304] Setting ErrFile to fd 2...
	I0419 12:49:45.609017   10930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:45.609143   10930 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:49:45.610546   10930 out.go:298] Setting JSON to false
	I0419 12:49:45.628204   10930 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6556,"bootTime":1713549629,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:49:45.628279   10930 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:49:45.632536   10930 out.go:177] * [newest-cni-603000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:49:45.644506   10930 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:49:45.639646   10930 notify.go:220] Checking for updates...
	I0419 12:49:45.658635   10930 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:49:45.668554   10930 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:49:45.674467   10930 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:49:45.680614   10930 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:49:45.688625   10930 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:49:45.691923   10930 config.go:182] Loaded profile config "default-k8s-diff-port-125000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:49:45.691984   10930 config.go:182] Loaded profile config "multinode-926000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:49:45.692030   10930 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:49:45.697565   10930 out.go:177] * Using the qemu2 driver based on user configuration
	I0419 12:49:45.707594   10930 start.go:297] selected driver: qemu2
	I0419 12:49:45.707599   10930 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:49:45.707606   10930 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:49:45.710094   10930 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0419 12:49:45.710126   10930 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0419 12:49:45.719582   10930 out.go:177] * Automatically selected the socket_vmnet network
	I0419 12:49:45.727751   10930 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0419 12:49:45.727788   10930 cni.go:84] Creating CNI manager for ""
	I0419 12:49:45.727795   10930 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:49:45.727799   10930 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 12:49:45.727844   10930 start.go:340] cluster config:
	{Name:newest-cni-603000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:49:45.732150   10930 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:45.739359   10930 out.go:177] * Starting "newest-cni-603000" primary control-plane node in "newest-cni-603000" cluster
	I0419 12:49:45.751617   10930 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:49:45.751633   10930 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:49:45.751641   10930 cache.go:56] Caching tarball of preloaded images
	I0419 12:49:45.751710   10930 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:49:45.751715   10930 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:49:45.751780   10930 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/newest-cni-603000/config.json ...
	I0419 12:49:45.751794   10930 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/newest-cni-603000/config.json: {Name:mkcecaf0e751c7102d52baf1e10a8808001473f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:49:45.752205   10930 start.go:360] acquireMachinesLock for newest-cni-603000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:45.757881   10930 start.go:364] duration metric: took 5.669ms to acquireMachinesLock for "newest-cni-603000"
	I0419 12:49:45.757896   10930 start.go:93] Provisioning new machine with config: &{Name:newest-cni-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:49:45.757948   10930 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:49:45.768466   10930 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:49:45.787437   10930 start.go:159] libmachine.API.Create for "newest-cni-603000" (driver="qemu2")
	I0419 12:49:45.787466   10930 client.go:168] LocalClient.Create starting
	I0419 12:49:45.787529   10930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:49:45.787564   10930 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:45.787574   10930 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:45.787614   10930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:49:45.787639   10930 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:45.787647   10930 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:45.788009   10930 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:49:45.926194   10930 main.go:141] libmachine: Creating SSH key...
	I0419 12:49:46.134236   10930 main.go:141] libmachine: Creating Disk image...
	I0419 12:49:46.134243   10930 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:49:46.134444   10930 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/disk.qcow2
	I0419 12:49:46.147458   10930 main.go:141] libmachine: STDOUT: 
	I0419 12:49:46.147480   10930 main.go:141] libmachine: STDERR: 
	I0419 12:49:46.147550   10930 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/disk.qcow2 +20000M
	I0419 12:49:46.158621   10930 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:49:46.158639   10930 main.go:141] libmachine: STDERR: 
	I0419 12:49:46.158659   10930 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/disk.qcow2
	I0419 12:49:46.158663   10930 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:49:46.158696   10930 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:0c:e1:7f:b3:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/disk.qcow2
	I0419 12:49:46.160414   10930 main.go:141] libmachine: STDOUT: 
	I0419 12:49:46.160431   10930 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:46.160451   10930 client.go:171] duration metric: took 372.989333ms to LocalClient.Create
	I0419 12:49:48.162584   10930 start.go:128] duration metric: took 2.40466675s to createHost
	I0419 12:49:48.162761   10930 start.go:83] releasing machines lock for "newest-cni-603000", held for 2.404800541s
	W0419 12:49:48.162851   10930 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:48.169243   10930 out.go:177] * Deleting "newest-cni-603000" in qemu2 ...
	W0419 12:49:48.196717   10930 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:48.196740   10930 start.go:728] Will try again in 5 seconds ...
	I0419 12:49:53.198888   10930 start.go:360] acquireMachinesLock for newest-cni-603000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:53.199441   10930 start.go:364] duration metric: took 414.125µs to acquireMachinesLock for "newest-cni-603000"
	I0419 12:49:53.199586   10930 start.go:93] Provisioning new machine with config: &{Name:newest-cni-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0419 12:49:53.199845   10930 start.go:125] createHost starting for "" (driver="qemu2")
	I0419 12:49:53.204754   10930 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0419 12:49:53.253907   10930 start.go:159] libmachine.API.Create for "newest-cni-603000" (driver="qemu2")
	I0419 12:49:53.253964   10930 client.go:168] LocalClient.Create starting
	I0419 12:49:53.254153   10930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/ca.pem
	I0419 12:49:53.254241   10930 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:53.254266   10930 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:53.254334   10930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18669-6895/.minikube/certs/cert.pem
	I0419 12:49:53.254379   10930 main.go:141] libmachine: Decoding PEM data...
	I0419 12:49:53.254394   10930 main.go:141] libmachine: Parsing certificate...
	I0419 12:49:53.254920   10930 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso...
	I0419 12:49:53.389890   10930 main.go:141] libmachine: Creating SSH key...
	I0419 12:49:53.494691   10930 main.go:141] libmachine: Creating Disk image...
	I0419 12:49:53.494698   10930 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0419 12:49:53.494868   10930 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/disk.qcow2.raw /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/disk.qcow2
	I0419 12:49:53.507601   10930 main.go:141] libmachine: STDOUT: 
	I0419 12:49:53.507625   10930 main.go:141] libmachine: STDERR: 
	I0419 12:49:53.507675   10930 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/disk.qcow2 +20000M
	I0419 12:49:53.518500   10930 main.go:141] libmachine: STDOUT: Image resized.
	
	I0419 12:49:53.518519   10930 main.go:141] libmachine: STDERR: 
	I0419 12:49:53.518530   10930 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/disk.qcow2
	I0419 12:49:53.518534   10930 main.go:141] libmachine: Starting QEMU VM...
	I0419 12:49:53.518572   10930 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:27:7c:3c:2f:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/disk.qcow2
	I0419 12:49:53.520315   10930 main.go:141] libmachine: STDOUT: 
	I0419 12:49:53.520330   10930 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:53.520343   10930 client.go:171] duration metric: took 266.378334ms to LocalClient.Create
	I0419 12:49:55.522470   10930 start.go:128] duration metric: took 2.322629042s to createHost
	I0419 12:49:55.522532   10930 start.go:83] releasing machines lock for "newest-cni-603000", held for 2.323114208s
	W0419 12:49:55.522878   10930 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-603000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-603000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:55.533495   10930 out.go:177] 
	W0419 12:49:55.539516   10930 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:49:55.539541   10930 out.go:239] * 
	* 
	W0419 12:49:55.542183   10930 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:49:55.552502   10930 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-603000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-603000 -n newest-cni-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-603000 -n newest-cni-603000: exit status 7 (71.1225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.06s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-125000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-125000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.245835667s)

-- stdout --
	* [default-k8s-diff-port-125000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-125000" primary control-plane node in "default-k8s-diff-port-125000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-125000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-125000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:49:45.615538   10931 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:49:45.615666   10931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:45.615670   10931 out.go:304] Setting ErrFile to fd 2...
	I0419 12:49:45.615672   10931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:45.615803   10931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:49:45.616710   10931 out.go:298] Setting JSON to false
	I0419 12:49:45.633255   10931 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6556,"bootTime":1713549629,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:49:45.633323   10931 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:49:45.644512   10931 out.go:177] * [default-k8s-diff-port-125000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:49:45.651655   10931 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:49:45.651759   10931 notify.go:220] Checking for updates...
	I0419 12:49:45.661567   10931 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:49:45.671603   10931 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:49:45.677533   10931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:49:45.684505   10931 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:49:45.691594   10931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:49:45.694786   10931 config.go:182] Loaded profile config "default-k8s-diff-port-125000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:49:45.695046   10931 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:49:45.707588   10931 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:49:45.711463   10931 start.go:297] selected driver: qemu2
	I0419 12:49:45.711468   10931 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-125000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-125000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:49:45.711531   10931 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:49:45.713749   10931 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0419 12:49:45.713788   10931 cni.go:84] Creating CNI manager for ""
	I0419 12:49:45.713795   10931 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:49:45.713823   10931 start.go:340] cluster config:
	{Name:default-k8s-diff-port-125000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-125000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:49:45.718275   10931 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:45.727596   10931 out.go:177] * Starting "default-k8s-diff-port-125000" primary control-plane node in "default-k8s-diff-port-125000" cluster
	I0419 12:49:45.731553   10931 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:49:45.731574   10931 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:49:45.731587   10931 cache.go:56] Caching tarball of preloaded images
	I0419 12:49:45.731668   10931 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:49:45.731674   10931 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:49:45.731747   10931 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/default-k8s-diff-port-125000/config.json ...
	I0419 12:49:45.732111   10931 start.go:360] acquireMachinesLock for default-k8s-diff-port-125000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:45.732153   10931 start.go:364] duration metric: took 33.375µs to acquireMachinesLock for "default-k8s-diff-port-125000"
	I0419 12:49:45.732164   10931 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:49:45.732170   10931 fix.go:54] fixHost starting: 
	I0419 12:49:45.732294   10931 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125000: state=Stopped err=<nil>
	W0419 12:49:45.732306   10931 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:49:45.747584   10931 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-125000" ...
	I0419 12:49:45.755578   10931 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:cc:0b:db:df:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/disk.qcow2
	I0419 12:49:45.757806   10931 main.go:141] libmachine: STDOUT: 
	I0419 12:49:45.757827   10931 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:45.757863   10931 fix.go:56] duration metric: took 25.69325ms for fixHost
	I0419 12:49:45.757868   10931 start.go:83] releasing machines lock for "default-k8s-diff-port-125000", held for 25.711ms
	W0419 12:49:45.757874   10931 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:49:45.757912   10931 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:45.757917   10931 start.go:728] Will try again in 5 seconds ...
	I0419 12:49:50.759998   10931 start.go:360] acquireMachinesLock for default-k8s-diff-port-125000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:50.760564   10931 start.go:364] duration metric: took 429.75µs to acquireMachinesLock for "default-k8s-diff-port-125000"
	I0419 12:49:50.760694   10931 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:49:50.760721   10931 fix.go:54] fixHost starting: 
	I0419 12:49:50.761482   10931 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125000: state=Stopped err=<nil>
	W0419 12:49:50.761511   10931 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:49:50.777754   10931 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-125000" ...
	I0419 12:49:50.782125   10931 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:cc:0b:db:df:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/default-k8s-diff-port-125000/disk.qcow2
	I0419 12:49:50.791574   10931 main.go:141] libmachine: STDOUT: 
	I0419 12:49:50.791634   10931 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:50.791716   10931 fix.go:56] duration metric: took 30.998916ms for fixHost
	I0419 12:49:50.791738   10931 start.go:83] releasing machines lock for "default-k8s-diff-port-125000", held for 31.151208ms
	W0419 12:49:50.791903   10931 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-125000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-125000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:50.798884   10931 out.go:177] 
	W0419 12:49:50.802961   10931 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:49:50.802983   10931 out.go:239] * 
	* 
	W0419 12:49:50.805535   10931 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:49:50.813907   10931 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-125000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000: exit status 7 (68.371792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-125000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.32s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-125000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000: exit status 7 (33.294792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-125000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-125000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-125000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-125000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.270791ms)

** stderr ** 
	error: context "default-k8s-diff-port-125000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-125000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000: exit status 7 (31.0765ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-125000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-125000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000: exit status 7 (31.0935ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-125000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-125000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-125000 --alsologtostderr -v=1: exit status 83 (42.262333ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-125000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-125000"

-- /stdout --
** stderr ** 
	I0419 12:49:51.091364   10959 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:49:51.091555   10959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:51.091558   10959 out.go:304] Setting ErrFile to fd 2...
	I0419 12:49:51.091560   10959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:51.091691   10959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:49:51.091915   10959 out.go:298] Setting JSON to false
	I0419 12:49:51.091922   10959 mustload.go:65] Loading cluster: default-k8s-diff-port-125000
	I0419 12:49:51.092099   10959 config.go:182] Loaded profile config "default-k8s-diff-port-125000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:49:51.096782   10959 out.go:177] * The control-plane node default-k8s-diff-port-125000 host is not running: state=Stopped
	I0419 12:49:51.099775   10959 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-125000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-125000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000: exit status 7 (30.763166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-125000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000: exit status 7 (31.27525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-125000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
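
Note: pause exits with status 83 here, paired with the "host is not running" hint, while status on the same stopped profile exits 7. A wrapper that should only pause running profiles can gate on the status exit code; a minimal sketch, assuming status keeps returning 0 for a running host and non-zero otherwise:

	# Pause only when the profile reports a running host; on this
	# stopped profile, status exits 7 and the pause is skipped.
	if out/minikube-darwin-arm64 status -p default-k8s-diff-port-125000 >/dev/null 2>&1; then
		out/minikube-darwin-arm64 pause -p default-k8s-diff-port-125000
	fi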

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-603000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-603000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.188441583s)

-- stdout --
	* [newest-cni-603000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-603000" primary control-plane node in "newest-cni-603000" cluster
	* Restarting existing qemu2 VM for "newest-cni-603000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-603000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0419 12:49:59.523427   11014 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:49:59.523572   11014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:59.523576   11014 out.go:304] Setting ErrFile to fd 2...
	I0419 12:49:59.523578   11014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:49:59.523723   11014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:49:59.524684   11014 out.go:298] Setting JSON to false
	I0419 12:49:59.540760   11014 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6570,"bootTime":1713549629,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:49:59.540834   11014 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:49:59.545080   11014 out.go:177] * [newest-cni-603000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:49:59.552178   11014 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:49:59.556072   11014 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:49:59.552270   11014 notify.go:220] Checking for updates...
	I0419 12:49:59.560108   11014 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:49:59.563116   11014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:49:59.566085   11014 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:49:59.569136   11014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:49:59.572472   11014 config.go:182] Loaded profile config "newest-cni-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:49:59.572744   11014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:49:59.577122   11014 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:49:59.584149   11014 start.go:297] selected driver: qemu2
	I0419 12:49:59.584156   11014 start.go:901] validating driver "qemu2" against &{Name:newest-cni-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:49:59.584208   11014 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:49:59.586544   11014 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0419 12:49:59.586579   11014 cni.go:84] Creating CNI manager for ""
	I0419 12:49:59.586586   11014 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:49:59.586621   11014 start.go:340] cluster config:
	{Name:newest-cni-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:49:59.590919   11014 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:49:59.599152   11014 out.go:177] * Starting "newest-cni-603000" primary control-plane node in "newest-cni-603000" cluster
	I0419 12:49:59.603107   11014 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:49:59.603121   11014 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:49:59.603128   11014 cache.go:56] Caching tarball of preloaded images
	I0419 12:49:59.603181   11014 preload.go:173] Found /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0419 12:49:59.603186   11014 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:49:59.603257   11014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/newest-cni-603000/config.json ...
	I0419 12:49:59.603743   11014 start.go:360] acquireMachinesLock for newest-cni-603000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:49:59.603769   11014 start.go:364] duration metric: took 20.916µs to acquireMachinesLock for "newest-cni-603000"
	I0419 12:49:59.603779   11014 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:49:59.603784   11014 fix.go:54] fixHost starting: 
	I0419 12:49:59.603899   11014 fix.go:112] recreateIfNeeded on newest-cni-603000: state=Stopped err=<nil>
	W0419 12:49:59.603907   11014 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:49:59.607166   11014 out.go:177] * Restarting existing qemu2 VM for "newest-cni-603000" ...
	I0419 12:49:59.614142   11014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:27:7c:3c:2f:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/disk.qcow2
	I0419 12:49:59.616193   11014 main.go:141] libmachine: STDOUT: 
	I0419 12:49:59.616212   11014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:49:59.616240   11014 fix.go:56] duration metric: took 12.45575ms for fixHost
	I0419 12:49:59.616246   11014 start.go:83] releasing machines lock for "newest-cni-603000", held for 12.472208ms
	W0419 12:49:59.616251   11014 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:49:59.616284   11014 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:49:59.616299   11014 start.go:728] Will try again in 5 seconds ...
	I0419 12:50:04.618510   11014 start.go:360] acquireMachinesLock for newest-cni-603000: {Name:mk9183c375b8bff1224231ff39fbf7ac08bf8604 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0419 12:50:04.618906   11014 start.go:364] duration metric: took 247.583µs to acquireMachinesLock for "newest-cni-603000"
	I0419 12:50:04.619017   11014 start.go:96] Skipping create...Using existing machine configuration
	I0419 12:50:04.619035   11014 fix.go:54] fixHost starting: 
	I0419 12:50:04.619750   11014 fix.go:112] recreateIfNeeded on newest-cni-603000: state=Stopped err=<nil>
	W0419 12:50:04.619776   11014 fix.go:138] unexpected machine state, will restart: <nil>
	I0419 12:50:04.630255   11014 out.go:177] * Restarting existing qemu2 VM for "newest-cni-603000" ...
	I0419 12:50:04.634367   11014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:27:7c:3c:2f:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18669-6895/.minikube/machines/newest-cni-603000/disk.qcow2
	I0419 12:50:04.644148   11014 main.go:141] libmachine: STDOUT: 
	I0419 12:50:04.644213   11014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0419 12:50:04.644278   11014 fix.go:56] duration metric: took 25.240375ms for fixHost
	I0419 12:50:04.644298   11014 start.go:83] releasing machines lock for "newest-cni-603000", held for 25.373167ms
	W0419 12:50:04.644544   11014 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0419 12:50:04.653255   11014 out.go:177] 
	W0419 12:50:04.657290   11014 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0419 12:50:04.657335   11014 out.go:239] * 
	* 
	W0419 12:50:04.659946   11014 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:50:04.666229   11014 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-603000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-603000 -n newest-cni-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-603000 -n newest-cni-603000: exit status 7 (70.710209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
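
Note: both restart attempts above fail at the same point: the qemu2 driver's socket_vmnet client cannot reach "/var/run/socket_vmnet", meaning the socket_vmnet daemon is not listening on the host. A minimal triage sketch; the daemon binary path is assumed to sit alongside the client path shown in the log, and the gateway address is the usual minikube default rather than a value taken from this run:

	# Is anything serving the socket the client tries to connect to?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If the daemon is down, restart it (vmnet needs root), then retry:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	out/minikube-darwin-arm64 start -p newest-cni-603000 --driver=qemu2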

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-603000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-603000 -n newest-cni-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-603000 -n newest-cni-603000: exit status 7 (32.841792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-603000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-603000 --alsologtostderr -v=1: exit status 83 (42.866583ms)

-- stdout --
	* The control-plane node newest-cni-603000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-603000"

-- /stdout --
** stderr ** 
	I0419 12:50:04.859600   11028 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:50:04.859728   11028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:50:04.859731   11028 out.go:304] Setting ErrFile to fd 2...
	I0419 12:50:04.859733   11028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:50:04.859841   11028 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:50:04.860045   11028 out.go:298] Setting JSON to false
	I0419 12:50:04.860053   11028 mustload.go:65] Loading cluster: newest-cni-603000
	I0419 12:50:04.860251   11028 config.go:182] Loaded profile config "newest-cni-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:50:04.863974   11028 out.go:177] * The control-plane node newest-cni-603000 host is not running: state=Stopped
	I0419 12:50:04.866945   11028 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-603000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-603000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-603000 -n newest-cni-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-603000 -n newest-cni-603000: exit status 7 (32.243792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-603000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-603000 -n newest-cni-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-603000 -n newest-cni-603000: exit status 7 (32.370542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.30.0/json-events 27.38
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.09
18 TestDownloadOnly/v1.30.0/DeleteAll 0.23
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.23
21 TestBinaryMirror 0.33
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 9.63
39 TestErrorSpam/start 0.38
40 TestErrorSpam/status 0.1
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.13
43 TestErrorSpam/stop 8.63
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.8
55 TestFunctional/serial/CacheCmd/cache/add_local 1.18
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.24
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.12
93 TestFunctional/parallel/License 0.2
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.37
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
126 TestFunctional/parallel/ProfileCmd/profile_list 0.11
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_addon-resizer_images 0.17
136 TestFunctional/delete_my-image_image 0.04
137 TestFunctional/delete_minikube_cached_images 0.04
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 1.92
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.33
193 TestMainNoArgs 0.04
240 TestStoppedBinaryUpgrade/Setup 1.04
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.47
258 TestNoKubernetes/serial/Stop 3.22
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
274 TestStartStop/group/old-k8s-version/serial/Stop 3.03
275 TestStoppedBinaryUpgrade/MinikubeLogs 0.66
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
286 TestStartStop/group/no-preload/serial/Stop 2
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
299 TestStartStop/group/embed-certs/serial/Stop 1.98
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.04
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.1
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 3.66
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-668000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-668000: exit status 85 (96.342209ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-668000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |          |
	|         | -p download-only-668000        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=qemu2                 |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 12:23:10
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 12:23:10.542052    7306 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:23:10.542208    7306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:23:10.542212    7306 out.go:304] Setting ErrFile to fd 2...
	I0419 12:23:10.542214    7306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:23:10.542337    7306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	W0419 12:23:10.542425    7306 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18669-6895/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18669-6895/.minikube/config/config.json: no such file or directory
	I0419 12:23:10.543658    7306 out.go:298] Setting JSON to true
	I0419 12:23:10.561782    7306 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4961,"bootTime":1713549629,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:23:10.561854    7306 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:23:10.565901    7306 out.go:97] [download-only-668000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:23:10.569930    7306 out.go:169] MINIKUBE_LOCATION=18669
	I0419 12:23:10.566054    7306 notify.go:220] Checking for updates...
	W0419 12:23:10.566086    7306 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball: no such file or directory
	I0419 12:23:10.576813    7306 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:23:10.581060    7306 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:23:10.583945    7306 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:23:10.586976    7306 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	W0419 12:23:10.594364    7306 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0419 12:23:10.594550    7306 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:23:10.597770    7306 out.go:97] Using the qemu2 driver based on user configuration
	I0419 12:23:10.597789    7306 start.go:297] selected driver: qemu2
	I0419 12:23:10.597804    7306 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:23:10.597916    7306 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:23:10.602003    7306 out.go:169] Automatically selected the socket_vmnet network
	I0419 12:23:10.607887    7306 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0419 12:23:10.607978    7306 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0419 12:23:10.608042    7306 cni.go:84] Creating CNI manager for ""
	I0419 12:23:10.608058    7306 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0419 12:23:10.608113    7306 start.go:340] cluster config:
	{Name:download-only-668000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-668000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:23:10.613782    7306 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:23:10.618118    7306 out.go:97] Downloading VM boot image ...
	I0419 12:23:10.618146    7306 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/iso/arm64/minikube-v1.33.0-1713236417-18649-arm64.iso
	I0419 12:23:18.520189    7306 out.go:97] Starting "download-only-668000" primary control-plane node in "download-only-668000" cluster
	I0419 12:23:18.520221    7306 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0419 12:23:18.578847    7306 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0419 12:23:18.578855    7306 cache.go:56] Caching tarball of preloaded images
	I0419 12:23:18.579636    7306 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0419 12:23:18.584978    7306 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0419 12:23:18.584984    7306 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0419 12:23:18.665849    7306 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0419 12:23:25.900977    7306 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0419 12:23:25.901148    7306 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0419 12:23:26.598112    7306 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0419 12:23:26.598320    7306 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/download-only-668000/config.json ...
	I0419 12:23:26.598336    7306 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/download-only-668000/config.json: {Name:mkc40379667fdfa62985ca9f1f652f71efaabcdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:23:26.598566    7306 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0419 12:23:26.598749    7306 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0419 12:23:27.283349    7306 out.go:169] 
	W0419 12:23:27.288352    7306 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18669-6895/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108a24e00 0x108a24e00 0x108a24e00 0x108a24e00 0x108a24e00 0x108a24e00 0x108a24e00] Decompressors:map[bz2:0x140007e3b50 gz:0x140007e3b58 tar:0x140007e3b00 tar.bz2:0x140007e3b10 tar.gz:0x140007e3b20 tar.xz:0x140007e3b30 tar.zst:0x140007e3b40 tbz2:0x140007e3b10 tgz:0x140007e3b20 txz:0x140007e3b30 tzst:0x140007e3b40 xz:0x140007e3b60 zip:0x140007e3b70 zst:0x140007e3b68] Getters:map[file:0x14002460580 http:0x140006b0370 https:0x140006b04b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0419 12:23:27.288393    7306 out_reason.go:110] 
	W0419 12:23:27.296251    7306 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0419 12:23:27.300237    7306 out.go:169] 
	
	
	* The control-plane node download-only-668000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-668000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
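
Note: the test passes because exit status 85 is the expected result of running logs against a download-only profile, but the embedded log also records the root cause of the v1.20.0 kubectl download failure captured in this run: dl.k8s.io answers 404 for the darwin/arm64 kubectl checksum, apparently because upstream never published darwin/arm64 binaries for v1.20.0. Both URLs below are taken from the log; a quick check, assuming curl is available:

	# 404 for the v1.20.0 darwin/arm64 kubectl checksum, as in the log above
	curl -o /dev/null -sL -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# the v1.30.0 equivalent fetched later in this run resolves normally
	curl -o /dev/null -sL -w '%{http_code}\n' https://dl.k8s.io/release/v1.30.0/bin/darwin/arm64/kubectl.sha256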

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-668000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.30.0/json-events (27.38s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-907000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-907000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=qemu2 : (27.382132542s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (27.38s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-907000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-907000: exit status 85 (88.598625ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-668000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
	|         | -p download-only-668000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
	| delete  | -p download-only-668000        | download-only-668000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT | 19 Apr 24 12:23 PDT |
	| start   | -o=json --download-only        | download-only-907000 | jenkins | v1.33.0-beta.0 | 19 Apr 24 12:23 PDT |                     |
	|         | -p download-only-907000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/19 12:23:27
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0419 12:23:27.972812    7340 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:23:27.972929    7340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:23:27.972932    7340 out.go:304] Setting ErrFile to fd 2...
	I0419 12:23:27.972935    7340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:23:27.973051    7340 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:23:27.974091    7340 out.go:298] Setting JSON to true
	I0419 12:23:27.990366    7340 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4978,"bootTime":1713549629,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:23:27.990432    7340 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:23:27.994918    7340 out.go:97] [download-only-907000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:23:27.998841    7340 out.go:169] MINIKUBE_LOCATION=18669
	I0419 12:23:27.995021    7340 notify.go:220] Checking for updates...
	I0419 12:23:28.005907    7340 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:23:28.008853    7340 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:23:28.011833    7340 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:23:28.014864    7340 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	W0419 12:23:28.020793    7340 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0419 12:23:28.020960    7340 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:23:28.023863    7340 out.go:97] Using the qemu2 driver based on user configuration
	I0419 12:23:28.023870    7340 start.go:297] selected driver: qemu2
	I0419 12:23:28.023873    7340 start.go:901] validating driver "qemu2" against <nil>
	I0419 12:23:28.023911    7340 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0419 12:23:28.026791    7340 out.go:169] Automatically selected the socket_vmnet network
	I0419 12:23:28.032030    7340 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0419 12:23:28.032120    7340 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0419 12:23:28.032149    7340 cni.go:84] Creating CNI manager for ""
	I0419 12:23:28.032157    7340 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0419 12:23:28.032163    7340 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0419 12:23:28.032216    7340 start.go:340] cluster config:
	{Name:download-only-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:23:28.036290    7340 iso.go:125] acquiring lock: {Name:mkd5241fbb2da943101f6316c0b178dec7936458 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0419 12:23:28.038866    7340 out.go:97] Starting "download-only-907000" primary control-plane node in "download-only-907000" cluster
	I0419 12:23:28.038874    7340 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:23:28.094713    7340 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:23:28.094729    7340 cache.go:56] Caching tarball of preloaded images
	I0419 12:23:28.094886    7340 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:23:28.097998    7340 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0419 12:23:28.098010    7340 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0419 12:23:28.176489    7340 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4?checksum=md5:677034533668c42fec962cc52f9b3c42 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0419 12:23:39.657834    7340 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0419 12:23:39.657998    7340 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0419 12:23:40.200365    7340 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0419 12:23:40.200569    7340 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/download-only-907000/config.json ...
	I0419 12:23:40.200591    7340 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18669-6895/.minikube/profiles/download-only-907000/config.json: {Name:mk1cd93d70af12234e438d2ce98c50682f985353 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0419 12:23:40.201667    7340 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0419 12:23:40.201818    7340 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18669-6895/.minikube/cache/darwin/arm64/v1.30.0/kubectl
	
	
	* The control-plane node download-only-907000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-907000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.09s)
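Note on the preload fetch logged above: the tarball URL carries a ?checksum=md5:... query, and the log shows minikube hashing the download before caching it (preload.go:237/255). A minimal Go sketch of that verify step — verifyMD5 is a hypothetical helper for illustration, not minikube's actual download code:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 re-hashes a downloaded file and compares it against the
	// digest carried in the ?checksum=md5:... query parameter.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// digest taken from the download URL in the log above
		fmt.Println(verifyMD5("preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4",
			"677034533668c42fec962cc52f9b3c42"))
	}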

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-907000
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-839000 --alsologtostderr --binary-mirror http://127.0.0.1:50990 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-839000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-839000
--- PASS: TestBinaryMirror (0.33s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-040000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-040000: exit status 85 (57.528ms)

-- stdout --
	* Profile "addons-040000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-040000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-040000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-040000: exit status 85 (61.324417ms)

-- stdout --
	* Profile "addons-040000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-040000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.63s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 status: exit status 7 (32.841708ms)

-- stdout --
	nospam-449000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 status: exit status 7 (32.146167ms)

-- stdout --
	nospam-449000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 status: exit status 7 (32.429541ms)

-- stdout --
	nospam-449000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 pause: exit status 83 (40.10675ms)

-- stdout --
	* The control-plane node nospam-449000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-449000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 pause: exit status 83 (41.405833ms)

-- stdout --
	* The control-plane node nospam-449000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-449000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 pause: exit status 83 (42.809917ms)

-- stdout --
	* The control-plane node nospam-449000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-449000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 unpause: exit status 83 (42.72825ms)

-- stdout --
	* The control-plane node nospam-449000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-449000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 unpause: exit status 83 (40.845584ms)

-- stdout --
	* The control-plane node nospam-449000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-449000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 unpause: exit status 83 (42.853ms)

-- stdout --
	* The control-plane node nospam-449000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-449000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 stop: (3.403519542s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 stop: (3.21016625s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-449000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-449000 stop: (2.009921625s)
--- PASS: TestErrorSpam/stop (8.63s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18669-6895/.minikube/files/etc/test/nested/copy/7304/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.80s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-663000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2005719945/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 cache add minikube-local-cache-test:functional-663000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 cache delete minikube-local-cache-test:functional-663000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-663000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 config get cpus: exit status 14 (34.081625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 config get cpus: exit status 14 (32.617542ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-663000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-663000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (161.322791ms)

-- stdout --
	* [functional-663000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0419 12:25:34.358220    7959 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:25:34.358415    7959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:25:34.358429    7959 out.go:304] Setting ErrFile to fd 2...
	I0419 12:25:34.358433    7959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:25:34.358766    7959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:25:34.360337    7959 out.go:298] Setting JSON to false
	I0419 12:25:34.379283    7959 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5105,"bootTime":1713549629,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:25:34.379350    7959 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:25:34.384359    7959 out.go:177] * [functional-663000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	I0419 12:25:34.390284    7959 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:25:34.390337    7959 notify.go:220] Checking for updates...
	I0419 12:25:34.395355    7959 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:25:34.398235    7959 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:25:34.401270    7959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:25:34.404310    7959 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:25:34.407242    7959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:25:34.410637    7959 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:25:34.410905    7959 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:25:34.415241    7959 out.go:177] * Using the qemu2 driver based on existing profile
	I0419 12:25:34.422278    7959 start.go:297] selected driver: qemu2
	I0419 12:25:34.422288    7959 start.go:901] validating driver "qemu2" against &{Name:functional-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:25:34.422348    7959 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:25:34.429214    7959 out.go:177] 
	W0419 12:25:34.433238    7959 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0419 12:25:34.437269    7959 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-663000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
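The dry run above fails fast with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MB is below the 1800MB minimum minikube reports as usable. A sketch of that kind of guard — constant and function names are hypothetical, not minikube's own API:

	package main

	import "fmt"

	// minUsableMemoryMB matches the "usable minimum of 1800MB" reported above.
	const minUsableMemoryMB = 1800

	// validateRequestedMemory rejects allocations below the usable minimum
	// before any VM is created, mirroring the exit-status-23 path in the log.
	func validateRequestedMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateRequestedMemory(250)) // the --memory 250MB dry run above
	}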

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-663000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-663000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.987ms)

-- stdout --
	* [functional-663000] minikube v1.33.0-beta.0 sur Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0419 12:25:34.587788    7970 out.go:291] Setting OutFile to fd 1 ...
	I0419 12:25:34.587932    7970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:25:34.587935    7970 out.go:304] Setting ErrFile to fd 2...
	I0419 12:25:34.587937    7970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0419 12:25:34.588070    7970 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18669-6895/.minikube/bin
	I0419 12:25:34.589577    7970 out.go:298] Setting JSON to false
	I0419 12:25:34.606488    7970 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5105,"bootTime":1713549629,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0419 12:25:34.606564    7970 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0419 12:25:34.611343    7970 out.go:177] * [functional-663000] minikube v1.33.0-beta.0 sur Darwin 14.4.1 (arm64)
	I0419 12:25:34.618242    7970 out.go:177]   - MINIKUBE_LOCATION=18669
	I0419 12:25:34.622257    7970 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	I0419 12:25:34.618300    7970 notify.go:220] Checking for updates...
	I0419 12:25:34.628202    7970 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0419 12:25:34.631269    7970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0419 12:25:34.634263    7970 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	I0419 12:25:34.637299    7970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0419 12:25:34.640596    7970 config.go:182] Loaded profile config "functional-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0419 12:25:34.640840    7970 driver.go:392] Setting default libvirt URI to qemu:///system
	I0419 12:25:34.645210    7970 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0419 12:25:34.652277    7970 start.go:297] selected driver: qemu2
	I0419 12:25:34.652282    7970 start.go:901] validating driver "qemu2" against &{Name:functional-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0419 12:25:34.652330    7970 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0419 12:25:34.658246    7970 out.go:177] 
	W0419 12:25:34.662325    7970 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0419 12:25:34.666128    7970 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.20s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.32996s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-663000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-663000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image rm gcr.io/google-containers/addon-resizer:functional-663000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-663000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 image save --daemon gcr.io/google-containers/addon-resizer:functional-663000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-663000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "72.573292ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "35.0905ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "70.460458ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "35.666583ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013237917s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
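The test above shells out to dscacheutil to prove the tunnel's service name resolves on the macOS host. A rough host-side equivalent in Go (a sketch of the same probe, not what the test actually runs):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10s budget, roughly what the dscacheutil run above took; the name
		// is the tunnel's in-cluster service FQDN from the log.
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		addrs, err := net.DefaultResolver.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
		if err != nil {
			fmt.Println("resolution failed:", err)
			return
		}
		fmt.Println("resolved:", addrs)
	}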

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-663000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-663000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-663000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-663000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-112000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-112000 --output=json --user=testUser: (1.919162583s)
--- PASS: TestJSONOutput/stop/Command (1.92s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-746000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-746000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.864667ms)

-- stdout --
	{"specversion":"1.0","id":"2ba5eb3b-aa33-4ee9-9ddc-e4bcfbb6243d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-746000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"088479b9-c25e-4879-ade6-c8479f7271d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18669"}}
	{"specversion":"1.0","id":"b5722617-2925-4217-9870-eebe5a3cb927","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig"}}
	{"specversion":"1.0","id":"59549eb6-895f-455b-aeea-7a9a05851749","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"7c0fb46d-50a1-4905-9f2f-2e207d8c0ad7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0949edc8-d8e7-413a-9c0c-97f086c9ce9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube"}}
	{"specversion":"1.0","id":"877b880d-3be5-4b45-843d-e2e3a6e03c27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8d065324-3050-48e8-854c-b5b05cb64a88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-746000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-746000
--- PASS: TestErrorJSONOutput (0.33s)
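Note: the stdout above illustrates minikube's --output=json format: one CloudEvents-style JSON object per line, with the failure reported as an io.k8s.sigs.minikube.error event carrying an exitcode. A minimal, hypothetical Go sketch for consuming such a stream (not minikube's own code; the struct below is an assumption based only on the fields visible above):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields visible in the stdout above; illustrative only.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start ... --output=json | this program
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // step lines can be long
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// e.g. "error DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on darwin/arm64"
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}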

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (1.04s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.04s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-537000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-537000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (96.662875ms)

-- stdout --
	* [NoKubernetes-537000] minikube v1.33.0-beta.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18669
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18669-6895/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18669-6895/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
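Note: exit status 14 here is the expected MK_USAGE outcome, since --kubernetes-version conflicts with --no-kubernetes. A rough Go sketch of how such an expected non-zero exit can be captured with the standard library (binary path and flags mirror the invocation above; the helper itself is an assumption, not the harness's code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs cmd and treats a non-zero exit as data rather than a failure.
func exitCode(cmd *exec.Cmd) (int, error) {
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil
	}
	return 0, err // nil on success, or a real error (e.g. binary not found)
}

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "NoKubernetes-537000",
		"--no-kubernetes", "--kubernetes-version=1.20", "--driver=qemu2")
	code, err := exitCode(cmd)
	if err != nil {
		panic(err)
	}
	fmt.Println("exit status:", code) // the log above reports 14 (MK_USAGE)
}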

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-537000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-537000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.762167ms)

-- stdout --
	* The control-plane node NoKubernetes-537000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-537000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.47s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.753489833s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.712773875s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.47s)

TestNoKubernetes/serial/Stop (3.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-537000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-537000: (3.219430291s)
--- PASS: TestNoKubernetes/serial/Stop (3.22s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-537000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-537000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.897542ms)

-- stdout --
	* The control-plane node NoKubernetes-537000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-537000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStartStop/group/old-k8s-version/serial/Stop (3.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-084000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-084000 --alsologtostderr -v=3: (3.028395208s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.03s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.66s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-860000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.66s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-084000 -n old-k8s-version-084000: exit status 7 (57.407583ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-084000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
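Note: `minikube status --format={{.Host}}` exits non-zero when the host is down (exit status 7 above, which the harness explicitly treats as "may be ok"). A hedged sketch of a helper honoring that convention (the helper name is an assumption; profile and flags are taken from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostState returns the Host field even when status exits non-zero,
// since a stopped host is a valid answer rather than a failure.
func hostState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile).Output()
	var ee *exec.ExitError
	if err != nil && !errors.As(err, &ee) {
		return "", err // not an exit-status error: report it
	}
	return strings.TrimSpace(string(out)), nil // e.g. "Stopped"
}

func main() {
	state, err := hostState("old-k8s-version-084000")
	if err != nil {
		panic(err)
	}
	fmt.Println("host:", state)
}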

TestStartStop/group/no-preload/serial/Stop (2s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-289000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-289000 --alsologtostderr -v=3: (1.995694667s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.00s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-289000 -n no-preload-289000: exit status 7 (54.555416ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-289000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (1.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-918000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-918000 --alsologtostderr -v=3: (1.979685833s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.98s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-918000 -n embed-certs-918000: exit status 7 (57.183875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-918000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-125000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-125000 --alsologtostderr -v=3: (3.037693208s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.04s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-125000 -n default-k8s-diff-port-125000: exit status 7 (34.067333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-125000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-603000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.66s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-603000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-603000 --alsologtostderr -v=3: (3.658919917s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.66s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-603000 -n newest-cni-603000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-603000 -n newest-cni-603000: exit status 7 (67.488583ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-603000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (12.99s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-663000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2078263208/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713554700183532000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2078263208/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713554700183532000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2078263208/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713554700183532000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2078263208/001/test-1713554700183532000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (52.457458ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.320667ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.626833ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.711041ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.125375ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.242ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.637875ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "sudo umount -f /mount-9p": exit status 83 (47.706166ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-663000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-663000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2078263208/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.99s)
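Note: the repeated findmnt probes above are a poll-until-deadline loop; the test skips when the 9p mount never appears, which on macOS is typically because an unsigned binary needs interactive approval to listen on non-localhost ports. An illustrative Go sketch of that polling pattern (function name, timeout, and interval are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls `minikube ssh "findmnt ..."` until it succeeds or times out.
func waitForMount(profile, mountPoint string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if cmd.Run() == nil { // exit 0 means the 9p mount is visible
			return true
		}
		time.Sleep(time.Second)
	}
	return false
}

func main() {
	if !waitForMount("functional-663000", "/mount-9p", 10*time.Second) {
		fmt.Println("mount did not appear within the deadline")
	}
}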

TestFunctional/parallel/MountCmd/specific-port (11.03s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-663000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port538559174/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (64.629916ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.650792ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.402042ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.068542ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.372292ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.672042ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.175125ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "sudo umount -f /mount-9p": exit status 83 (44.657333ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-663000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-663000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port538559174/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.03s)

TestFunctional/parallel/MountCmd/VerifyCleanup (10.08s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-663000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3684381883/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-663000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3684381883/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-663000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3684381883/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T" /mount1: exit status 83 (81.194ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T" /mount1: exit status 83 (88.787375ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T" /mount1: exit status 83 (88.061041ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T" /mount1: exit status 83 (85.802875ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T" /mount1: exit status 83 (87.038ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T" /mount1: exit status 83 (88.0555ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-663000 ssh "findmnt -T" /mount1: exit status 83 (95.411917ms)

-- stdout --
	* The control-plane node functional-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-663000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-663000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3684381883/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-663000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3684381883/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-663000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3684381883/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (10.08s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.5s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-342000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-342000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-342000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-342000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-342000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-342000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-342000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-342000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-342000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-342000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-342000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-342000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-342000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-342000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-342000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-342000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-342000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-342000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-342000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-342000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-342000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-342000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-342000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-342000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-342000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-342000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-342000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-342000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-342000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-342000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-342000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: kubelet daemon config:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> k8s: kubelet logs:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-342000

>>> host: docker daemon status:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: docker daemon config:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: docker system info:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: cri-docker daemon status:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: cri-docker daemon config:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: cri-dockerd version:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: containerd daemon status:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: containerd daemon config:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: containerd config dump:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: crio daemon status:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: crio daemon config:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: /etc/crio:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

>>> host: crio config:
* Profile "cilium-342000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-342000"

----------------------- debugLogs end: cilium-342000 [took: 2.270248291s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-342000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-342000
--- SKIP: TestNetworkPlugins/group/cilium (2.50s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-103000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-103000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
