Test Report: QEMU_macOS 16597

f978965594d8c309a3fb9a5e198e88f65b92a95d:2023-05-30:29495

Failed tests (140/236)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 23.17
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.85
22 TestAddons/Setup 10.21
23 TestCertOptions 9.93
24 TestCertExpiration 195.16
25 TestDockerFlags 10.16
26 TestForceSystemdFlag 12.11
27 TestForceSystemdEnv 9.94
32 TestErrorSpam/setup 9.66
41 TestFunctional/serial/StartWithProxy 9.8
43 TestFunctional/serial/SoftStart 5.25
44 TestFunctional/serial/KubeContext 0.06
45 TestFunctional/serial/KubectlGetPods 0.06
52 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
53 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
55 TestFunctional/serial/MinikubeKubectlCmd 0.49
56 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.59
57 TestFunctional/serial/ExtraConfig 5.25
58 TestFunctional/serial/ComponentHealth 0.06
59 TestFunctional/serial/LogsCmd 0.08
60 TestFunctional/serial/LogsFileCmd 0.07
63 TestFunctional/parallel/DashboardCmd 0.2
66 TestFunctional/parallel/StatusCmd 0.12
70 TestFunctional/parallel/ServiceCmdConnect 0.13
72 TestFunctional/parallel/PersistentVolumeClaim 0.03
74 TestFunctional/parallel/SSHCmd 0.12
75 TestFunctional/parallel/CpCmd 0.17
77 TestFunctional/parallel/FileSync 0.07
78 TestFunctional/parallel/CertSync 0.28
82 TestFunctional/parallel/NodeLabels 0.06
84 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
88 TestFunctional/parallel/Version/components 0.04
89 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
90 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
91 TestFunctional/parallel/ImageCommands/ImageListJson 0.03
92 TestFunctional/parallel/ImageCommands/ImageListYaml 0.03
93 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
95 TestFunctional/parallel/DockerEnv/bash 0.05
96 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
97 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
98 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
99 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
100 TestFunctional/parallel/ServiceCmd/List 0.05
101 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
102 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
103 TestFunctional/parallel/ServiceCmd/Format 0.05
104 TestFunctional/parallel/ServiceCmd/URL 0.04
106 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.06
109 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
110 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 66.92
111 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.38
112 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.39
113 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.5
114 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.03
116 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
124 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
126 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 37.68
134 TestImageBuild/serial/Setup 9.94
136 TestIngressAddonLegacy/StartLegacyK8sCluster 25.37
138 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 0.12
140 TestIngressAddonLegacy/serial/ValidateIngressAddons 0.03
143 TestJSONOutput/start/Command 9.67
149 TestJSONOutput/pause/Command 0.08
155 TestJSONOutput/unpause/Command 0.05
172 TestMinikubeProfile 10.33
175 TestMountStart/serial/StartWithMountFirst 10.16
178 TestMultiNode/serial/FreshStart2Nodes 9.78
179 TestMultiNode/serial/DeployApp2Nodes 92.89
180 TestMultiNode/serial/PingHostFrom2Pods 0.08
181 TestMultiNode/serial/AddNode 0.07
182 TestMultiNode/serial/ProfileList 0.11
183 TestMultiNode/serial/CopyFile 0.06
184 TestMultiNode/serial/StopNode 0.13
185 TestMultiNode/serial/StartAfterStop 0.11
186 TestMultiNode/serial/RestartKeepsNodes 5.39
187 TestMultiNode/serial/DeleteNode 0.1
188 TestMultiNode/serial/StopMultiNode 0.15
189 TestMultiNode/serial/RestartMultiNode 5.24
190 TestMultiNode/serial/ValidateNameConflict 20.2
194 TestPreload 10
196 TestScheduledStopUnix 9.95
197 TestSkaffold 14.53
200 TestRunningBinaryUpgrade 167.87
202 TestKubernetesUpgrade 15.41
215 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.2
216 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.09
217 TestStoppedBinaryUpgrade/Setup 141.29
219 TestPause/serial/Start 9.89
229 TestNoKubernetes/serial/StartWithK8s 9.75
230 TestNoKubernetes/serial/StartWithStopK8s 5.37
231 TestNoKubernetes/serial/Start 5.36
235 TestNoKubernetes/serial/StartNoArgs 5.36
237 TestNetworkPlugins/group/auto/Start 9.76
238 TestNetworkPlugins/group/calico/Start 9.76
239 TestNetworkPlugins/group/custom-flannel/Start 9.74
240 TestNetworkPlugins/group/false/Start 9.76
241 TestNetworkPlugins/group/kindnet/Start 9.63
242 TestNetworkPlugins/group/flannel/Start 9.83
243 TestNetworkPlugins/group/enable-default-cni/Start 9.76
244 TestNetworkPlugins/group/bridge/Start 9.75
245 TestNetworkPlugins/group/kubenet/Start 9.82
246 TestStoppedBinaryUpgrade/Upgrade 1.83
247 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
249 TestStartStop/group/old-k8s-version/serial/FirstStart 11.14
251 TestStartStop/group/no-preload/serial/FirstStart 9.95
252 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
253 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
256 TestStartStop/group/old-k8s-version/serial/SecondStart 6.97
257 TestStartStop/group/no-preload/serial/DeployApp 0.09
258 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
261 TestStartStop/group/no-preload/serial/SecondStart 5.19
262 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
263 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.05
264 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
265 TestStartStop/group/old-k8s-version/serial/Pause 0.09
266 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
267 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
268 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
269 TestStartStop/group/no-preload/serial/Pause 0.11
271 TestStartStop/group/embed-certs/serial/FirstStart 9.83
273 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.39
274 TestStartStop/group/embed-certs/serial/DeployApp 0.1
275 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
278 TestStartStop/group/embed-certs/serial/SecondStart 7.04
279 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.08
280 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
283 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.19
284 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
285 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
286 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
287 TestStartStop/group/embed-certs/serial/Pause 0.1
288 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
289 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
290 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
291 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
293 TestStartStop/group/newest-cni/serial/FirstStart 9.88
298 TestStartStop/group/newest-cni/serial/SecondStart 5.24
301 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
302 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (23.17s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-063000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-063000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (23.16709675s)

-- stdout --
	{"specversion":"1.0","id":"b8426f7c-c6ef-479e-b8a3-436b97c06675","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-063000] minikube v1.30.1 on Darwin 13.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b912a4d-bb8c-4863-9485-c341572f444b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16597"}}
	{"specversion":"1.0","id":"5b18c8c1-49f9-470a-8913-c71a97eb2246","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig"}}
	{"specversion":"1.0","id":"af4b969b-1c7f-40f7-b7f1-67db7bd096cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"bdb91978-504f-4557-a25a-614f54d0eb40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"55642bfc-3514-40c3-a36e-be7bc25905f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube"}}
	{"specversion":"1.0","id":"5837539d-ac01-4378-865f-7e2a229a6b0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"07ae802c-f78c-4d27-8cad-b553977ac4fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"eaa57262-bcd5-478d-87b1-8953297eeadb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"3e8c00a8-4210-4715-b366-35211119fbd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c80c2218-47c2-42fd-9a55-02a8e6e3ae9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-063000 in cluster download-only-063000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"75776fb3-e898-46d2-9ed6-fe90b43f3c73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ffd168af-8b89-4b0a-8ba9-07dd68f0d9af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107992378 0x107992378 0x107992378 0x107992378 0x107992378 0x107992378 0x107992378] Decompressors:map[bz2:0x14000590918 gz:0x14000590970 tar:0x14000590920 tar.bz2:0x14000590930 tar.gz:0x14000590940 tar.xz:0x14000590950 tar.zst:0x14000590960 tbz2:0x14000590930 tgz:0x140005
90940 txz:0x14000590950 tzst:0x14000590960 xz:0x14000590978 zip:0x14000590980 zst:0x14000590990] Getters:map[file:0x14001188580 http:0x140009eea00 https:0x140009eea50] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"4194fa55-97f3-460b-8c5a-e35febb14fe9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0530 13:04:49.753461    6595 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:04:49.753570    6595 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:04:49.753573    6595 out.go:309] Setting ErrFile to fd 2...
	I0530 13:04:49.753576    6595 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:04:49.753642    6595 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	W0530 13:04:49.753707    6595 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16597-6175/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16597-6175/.minikube/config/config.json: no such file or directory
	I0530 13:04:49.754911    6595 out.go:303] Setting JSON to true
	I0530 13:04:49.772777    6595 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3860,"bootTime":1685473229,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:04:49.772847    6595 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:04:49.777785    6595 out.go:97] [download-only-063000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:04:49.781950    6595 out.go:169] MINIKUBE_LOCATION=16597
	I0530 13:04:49.777924    6595 notify.go:220] Checking for updates...
	W0530 13:04:49.777932    6595 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball: no such file or directory
	I0530 13:04:49.788542    6595 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:04:49.796928    6595 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:04:49.799812    6595 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:04:49.802954    6595 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	W0530 13:04:49.807228    6595 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0530 13:04:49.807396    6595 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:04:49.810865    6595 out.go:97] Using the qemu2 driver based on user configuration
	I0530 13:04:49.810881    6595 start.go:295] selected driver: qemu2
	I0530 13:04:49.810894    6595 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:04:49.810947    6595 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:04:49.813931    6595 out.go:169] Automatically selected the socket_vmnet network
	I0530 13:04:49.818949    6595 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0530 13:04:49.819033    6595 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0530 13:04:49.819062    6595 cni.go:84] Creating CNI manager for ""
	I0530 13:04:49.819090    6595 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0530 13:04:49.819095    6595 start_flags.go:319] config:
	{Name:download-only-063000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-063000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:04:49.819285    6595 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:04:49.823839    6595 out.go:97] Downloading VM boot image ...
	I0530 13:04:49.823879    6595 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso
	I0530 13:05:01.434780    6595 out.go:97] Starting control plane node download-only-063000 in cluster download-only-063000
	I0530 13:05:01.434811    6595 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0530 13:05:01.494208    6595 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0530 13:05:01.494283    6595 cache.go:57] Caching tarball of preloaded images
	I0530 13:05:01.495274    6595 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0530 13:05:01.499483    6595 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0530 13:05:01.499489    6595 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0530 13:05:01.628983    6595 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0530 13:05:11.758015    6595 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0530 13:05:11.758168    6595 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0530 13:05:12.402860    6595 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0530 13:05:12.403053    6595 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/download-only-063000/config.json ...
	I0530 13:05:12.403072    6595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/download-only-063000/config.json: {Name:mk75755e468446e335bbf12293cdade13be013e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:05:12.403317    6595 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0530 13:05:12.404305    6595 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0530 13:05:12.846683    6595 out.go:169] 
	W0530 13:05:12.850350    6595 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107992378 0x107992378 0x107992378 0x107992378 0x107992378 0x107992378 0x107992378] Decompressors:map[bz2:0x14000590918 gz:0x14000590970 tar:0x14000590920 tar.bz2:0x14000590930 tar.gz:0x14000590940 tar.xz:0x14000590950 tar.zst:0x14000590960 tbz2:0x14000590930 tgz:0x14000590940 txz:0x14000590950 tzst:0x14000590960 xz:0x14000590978 zip:0x14000590980 zst:0x14000590990] Getters:map[file:0x14001188580 http:0x140009eea00 https:0x140009eea50] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0530 13:05:12.850384    6595 out_reason.go:110] 
	W0530 13:05:12.858282    6595 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:05:12.862221    6595 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-063000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (23.17s)
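The failure above is an upstream availability gap rather than a flaky download: dl.k8s.io answers 404 for the v1.16.0 darwin/arm64 kubectl checksum, and a release that old predates published darwin/arm64 kubectl binaries. A minimal check from the CI host, assuming only that curl is available (both URLs are copied from the error message above):

    # Print the final HTTP status after following dl.k8s.io's redirect.
    curl -o /dev/null -sL -w '%{http_code}\n' \
      https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl
    curl -o /dev/null -sL -w '%{http_code}\n' \
      https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1
    # 404 from either request confirms the artifact simply is not published,
    # so the "Failed to cache kubectl" error will reproduce on every run.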

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
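This subtest only stats the binary that the json-events subtest above was supposed to cache, so it fails as a direct consequence of the same 404. A quick way to see the dependency on the agent (path copied from the error above):

    # Expected to report "No such file or directory" until the download in
    # TestDownloadOnly/v1.16.0/json-events succeeds.
    ls -l /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/darwin/arm64/v1.16.0/kubectl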

TestOffline (9.85s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-784000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-784000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.705943041s)

-- stdout --
	* [offline-docker-784000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-784000 in cluster offline-docker-784000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-784000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0530 13:12:22.124708    7750 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:12:22.124814    7750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:12:22.124816    7750 out.go:309] Setting ErrFile to fd 2...
	I0530 13:12:22.124818    7750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:12:22.124889    7750 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:12:22.126145    7750 out.go:303] Setting JSON to false
	I0530 13:12:22.142703    7750 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4313,"bootTime":1685473229,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:12:22.142783    7750 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:12:22.146853    7750 out.go:177] * [offline-docker-784000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:12:22.153849    7750 notify.go:220] Checking for updates...
	I0530 13:12:22.157811    7750 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:12:22.160869    7750 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:12:22.163861    7750 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:12:22.166940    7750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:12:22.169850    7750 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:12:22.172936    7750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:12:22.176173    7750 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:12:22.176199    7750 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:12:22.179860    7750 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:12:22.186825    7750 start.go:295] selected driver: qemu2
	I0530 13:12:22.186834    7750 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:12:22.186842    7750 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:12:22.188721    7750 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:12:22.191845    7750 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:12:22.193128    7750 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:12:22.193144    7750 cni.go:84] Creating CNI manager for ""
	I0530 13:12:22.193151    7750 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:12:22.193155    7750 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0530 13:12:22.193160    7750 start_flags.go:319] config:
	{Name:offline-docker-784000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:offline-docker-784000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:12:22.193235    7750 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:12:22.201824    7750 out.go:177] * Starting control plane node offline-docker-784000 in cluster offline-docker-784000
	I0530 13:12:22.205841    7750 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:12:22.205892    7750 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:12:22.205908    7750 cache.go:57] Caching tarball of preloaded images
	I0530 13:12:22.205974    7750 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:12:22.205978    7750 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:12:22.206036    7750 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/offline-docker-784000/config.json ...
	I0530 13:12:22.206050    7750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/offline-docker-784000/config.json: {Name:mkd4da7364706bcd52bff6cb8abea60553020b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:12:22.206236    7750 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:12:22.206250    7750 start.go:364] acquiring machines lock for offline-docker-784000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:12:22.206275    7750 start.go:368] acquired machines lock for "offline-docker-784000" in 20.708µs
	I0530 13:12:22.206288    7750 start.go:93] Provisioning new machine with config: &{Name:offline-docker-784000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.27.2 ClusterName:offline-docker-784000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:12:22.206324    7750 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:12:22.214840    7750 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0530 13:12:22.229186    7750 start.go:159] libmachine.API.Create for "offline-docker-784000" (driver="qemu2")
	I0530 13:12:22.229217    7750 client.go:168] LocalClient.Create starting
	I0530 13:12:22.229284    7750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:12:22.229308    7750 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:22.229327    7750 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:22.229383    7750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:12:22.229399    7750 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:22.229405    7750 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:22.229756    7750 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:12:22.346025    7750 main.go:141] libmachine: Creating SSH key...
	I0530 13:12:22.390123    7750 main.go:141] libmachine: Creating Disk image...
	I0530 13:12:22.390137    7750 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:12:22.390315    7750 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/disk.qcow2
	I0530 13:12:22.399261    7750 main.go:141] libmachine: STDOUT: 
	I0530 13:12:22.399276    7750 main.go:141] libmachine: STDERR: 
	I0530 13:12:22.399338    7750 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/disk.qcow2 +20000M
	I0530 13:12:22.410035    7750 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:12:22.410057    7750 main.go:141] libmachine: STDERR: 
	I0530 13:12:22.410078    7750 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/disk.qcow2
	I0530 13:12:22.410087    7750 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:12:22.410126    7750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:5e:ae:16:63:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/disk.qcow2
	I0530 13:12:22.411683    7750 main.go:141] libmachine: STDOUT: 
	I0530 13:12:22.411698    7750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:12:22.411716    7750 client.go:171] LocalClient.Create took 182.498917ms
	I0530 13:12:24.412412    7750 start.go:128] duration metric: createHost completed in 2.206135375s
	I0530 13:12:24.412433    7750 start.go:83] releasing machines lock for "offline-docker-784000", held for 2.206208833s
	W0530 13:12:24.412445    7750 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:12:24.420629    7750 out.go:177] * Deleting "offline-docker-784000" in qemu2 ...
	W0530 13:12:24.428489    7750 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:12:24.428505    7750 start.go:702] Will try again in 5 seconds ...
	I0530 13:12:29.430607    7750 start.go:364] acquiring machines lock for offline-docker-784000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:12:29.431025    7750 start.go:368] acquired machines lock for "offline-docker-784000" in 318.417µs
	I0530 13:12:29.431150    7750 start.go:93] Provisioning new machine with config: &{Name:offline-docker-784000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.27.2 ClusterName:offline-docker-784000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:12:29.431404    7750 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:12:29.439108    7750 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0530 13:12:29.486921    7750 start.go:159] libmachine.API.Create for "offline-docker-784000" (driver="qemu2")
	I0530 13:12:29.486966    7750 client.go:168] LocalClient.Create starting
	I0530 13:12:29.487092    7750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:12:29.487128    7750 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:29.487151    7750 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:29.487227    7750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:12:29.487255    7750 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:29.487272    7750 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:29.487791    7750 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:12:29.615044    7750 main.go:141] libmachine: Creating SSH key...
	I0530 13:12:29.747978    7750 main.go:141] libmachine: Creating Disk image...
	I0530 13:12:29.747988    7750 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:12:29.748136    7750 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/disk.qcow2
	I0530 13:12:29.756949    7750 main.go:141] libmachine: STDOUT: 
	I0530 13:12:29.756962    7750 main.go:141] libmachine: STDERR: 
	I0530 13:12:29.757032    7750 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/disk.qcow2 +20000M
	I0530 13:12:29.764297    7750 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:12:29.764312    7750 main.go:141] libmachine: STDERR: 
	I0530 13:12:29.764327    7750 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/disk.qcow2
	I0530 13:12:29.764337    7750 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:12:29.764388    7750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:98:58:c0:34:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/offline-docker-784000/disk.qcow2
	I0530 13:12:29.765923    7750 main.go:141] libmachine: STDOUT: 
	I0530 13:12:29.765938    7750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:12:29.765950    7750 client.go:171] LocalClient.Create took 278.986083ms
	I0530 13:12:31.767973    7750 start.go:128] duration metric: createHost completed in 2.33660875s
	I0530 13:12:31.768002    7750 start.go:83] releasing machines lock for "offline-docker-784000", held for 2.337014958s
	W0530 13:12:31.768153    7750 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-784000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-784000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:12:31.776449    7750 out.go:177] 
	W0530 13:12:31.779444    7750 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:12:31.779449    7750 out.go:239] * 
	* 
	W0530 13:12:31.779880    7750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:12:31.787498    7750 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-784000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-05-30 13:12:31.800104 -0700 PDT m=+462.140240542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-784000 -n offline-docker-784000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-784000 -n offline-docker-784000: exit status 7 (30.846458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-784000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-784000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-784000
--- FAIL: TestOffline (9.85s)
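This failure, like the other ~9-10 second qemu2 start failures in the table above, stops at the same point: the socket_vmnet client cannot reach "/var/run/socket_vmnet" (connection refused), which points at the socket_vmnet daemon not running or not listening on that path on the agent rather than at anything minikube-specific. A minimal sketch of how to check, assuming the layout shown in the log (/opt/socket_vmnet/bin/socket_vmnet_client, socket at /var/run/socket_vmnet):

    # Is a socket_vmnet daemon process alive, and does the socket path exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # No matching process, a missing socket file, or permissions the Jenkins
    # user cannot use would all surface as the "Connection refused" seen
    # across these tests; restarting socket_vmnet (however it is managed on
    # this agent) is the usual fix.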

TestAddons/Setup (10.21s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-827000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-827000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.202886125s)

-- stdout --
	* [addons-827000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-827000 in cluster addons-827000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-827000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0530 13:05:26.630153    6660 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:05:26.630278    6660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:05:26.630282    6660 out.go:309] Setting ErrFile to fd 2...
	I0530 13:05:26.630285    6660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:05:26.630353    6660 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:05:26.631443    6660 out.go:303] Setting JSON to false
	I0530 13:05:26.646554    6660 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3897,"bootTime":1685473229,"procs":424,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:05:26.646609    6660 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:05:26.651859    6660 out.go:177] * [addons-827000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:05:26.658796    6660 notify.go:220] Checking for updates...
	I0530 13:05:26.660741    6660 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:05:26.663853    6660 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:05:26.666858    6660 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:05:26.669894    6660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:05:26.672844    6660 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:05:26.675828    6660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:05:26.678970    6660 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:05:26.682826    6660 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:05:26.689801    6660 start.go:295] selected driver: qemu2
	I0530 13:05:26.689807    6660 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:05:26.689813    6660 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:05:26.691623    6660 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:05:26.694776    6660 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:05:26.697986    6660 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:05:26.698002    6660 cni.go:84] Creating CNI manager for ""
	I0530 13:05:26.698009    6660 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:05:26.698013    6660 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0530 13:05:26.698021    6660 start_flags.go:319] config:
	{Name:addons-827000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-827000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:05:26.698089    6660 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:05:26.705750    6660 out.go:177] * Starting control plane node addons-827000 in cluster addons-827000
	I0530 13:05:26.709812    6660 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:05:26.709835    6660 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:05:26.709847    6660 cache.go:57] Caching tarball of preloaded images
	I0530 13:05:26.709923    6660 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:05:26.709928    6660 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:05:26.710131    6660 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/addons-827000/config.json ...
	I0530 13:05:26.710146    6660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/addons-827000/config.json: {Name:mk62356156e2e68c3b2605cf3ad0efb7ef1ee294 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:05:26.710358    6660 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:05:26.710399    6660 start.go:364] acquiring machines lock for addons-827000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:05:26.710483    6660 start.go:368] acquired machines lock for "addons-827000" in 79.125µs
	I0530 13:05:26.710496    6660 start.go:93] Provisioning new machine with config: &{Name:addons-827000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-827000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:05:26.710524    6660 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:05:26.718874    6660 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0530 13:05:26.736171    6660 start.go:159] libmachine.API.Create for "addons-827000" (driver="qemu2")
	I0530 13:05:26.736191    6660 client.go:168] LocalClient.Create starting
	I0530 13:05:26.736342    6660 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:05:26.831287    6660 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:05:26.973242    6660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:05:27.162824    6660 main.go:141] libmachine: Creating SSH key...
	I0530 13:05:27.280787    6660 main.go:141] libmachine: Creating Disk image...
	I0530 13:05:27.280793    6660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:05:27.280948    6660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/disk.qcow2
	I0530 13:05:27.289970    6660 main.go:141] libmachine: STDOUT: 
	I0530 13:05:27.289991    6660 main.go:141] libmachine: STDERR: 
	I0530 13:05:27.290049    6660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/disk.qcow2 +20000M
	I0530 13:05:27.297342    6660 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:05:27.297365    6660 main.go:141] libmachine: STDERR: 
	I0530 13:05:27.297382    6660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/disk.qcow2
	I0530 13:05:27.297399    6660 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:05:27.297437    6660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:84:a7:e9:a3:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/disk.qcow2
	I0530 13:05:27.298995    6660 main.go:141] libmachine: STDOUT: 
	I0530 13:05:27.299008    6660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:05:27.299028    6660 client.go:171] LocalClient.Create took 562.846459ms
	I0530 13:05:29.301158    6660 start.go:128] duration metric: createHost completed in 2.590679625s
	I0530 13:05:29.301227    6660 start.go:83] releasing machines lock for "addons-827000", held for 2.590797958s
	W0530 13:05:29.301302    6660 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:05:29.312980    6660 out.go:177] * Deleting "addons-827000" in qemu2 ...
	W0530 13:05:29.334699    6660 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:05:29.334728    6660 start.go:702] Will try again in 5 seconds ...
	I0530 13:05:34.336996    6660 start.go:364] acquiring machines lock for addons-827000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:05:34.337573    6660 start.go:368] acquired machines lock for "addons-827000" in 466.375µs
	I0530 13:05:34.337706    6660 start.go:93] Provisioning new machine with config: &{Name:addons-827000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-827000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:05:34.338007    6660 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:05:34.345124    6660 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0530 13:05:34.394573    6660 start.go:159] libmachine.API.Create for "addons-827000" (driver="qemu2")
	I0530 13:05:34.394609    6660 client.go:168] LocalClient.Create starting
	I0530 13:05:34.394742    6660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:05:34.394795    6660 main.go:141] libmachine: Decoding PEM data...
	I0530 13:05:34.394818    6660 main.go:141] libmachine: Parsing certificate...
	I0530 13:05:34.394919    6660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:05:34.394955    6660 main.go:141] libmachine: Decoding PEM data...
	I0530 13:05:34.394971    6660 main.go:141] libmachine: Parsing certificate...
	I0530 13:05:34.395531    6660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:05:34.527134    6660 main.go:141] libmachine: Creating SSH key...
	I0530 13:05:34.743600    6660 main.go:141] libmachine: Creating Disk image...
	I0530 13:05:34.743615    6660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:05:34.743815    6660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/disk.qcow2
	I0530 13:05:34.752986    6660 main.go:141] libmachine: STDOUT: 
	I0530 13:05:34.753006    6660 main.go:141] libmachine: STDERR: 
	I0530 13:05:34.753065    6660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/disk.qcow2 +20000M
	I0530 13:05:34.760264    6660 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:05:34.760279    6660 main.go:141] libmachine: STDERR: 
	I0530 13:05:34.760294    6660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/disk.qcow2
	I0530 13:05:34.760302    6660 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:05:34.760344    6660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:45:a7:ca:6f:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/addons-827000/disk.qcow2
	I0530 13:05:34.761873    6660 main.go:141] libmachine: STDOUT: 
	I0530 13:05:34.761887    6660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:05:34.761900    6660 client.go:171] LocalClient.Create took 367.296ms
	I0530 13:05:36.764021    6660 start.go:128] duration metric: createHost completed in 2.426051666s
	I0530 13:05:36.764077    6660 start.go:83] releasing machines lock for "addons-827000", held for 2.426541833s
	W0530 13:05:36.764597    6660 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-827000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-827000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:05:36.776186    6660 out.go:177] 
	W0530 13:05:36.781366    6660 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:05:36.781392    6660 out.go:239] * 
	* 
	W0530 13:05:36.783919    6660 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:05:36.796087    6660 out.go:177] 

** /stderr **
addons_test.go:90: out/minikube-darwin-arm64 start -p addons-827000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.21s)
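
Every start attempt in this run dies the same way: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM never gets a network and minikube gives up after one delete-and-retry cycle. A minimal triage sketch for the build host follows; it assumes socket_vmnet was installed under /opt/socket_vmnet (matching the client path in the log), and the launchd service name in the last command is a guess, not something taken from this report.
	ls -l /var/run/socket_vmnet            # the unix socket should exist while the daemon is up
	pgrep -fl socket_vmnet                 # is any socket_vmnet process alive on this agent?
	sudo launchctl list | grep -i vmnet    # look for a registered launchd service (label assumed)
If the daemon is down, restarting it (however it is managed on this agent) should clear the identical GUEST_PROVISION failures in the tests that follow.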

TestCertOptions (9.93s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-937000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-937000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.644496833s)

-- stdout --
	* [cert-options-937000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-937000 in cluster cert-options-937000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-937000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-937000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-937000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-937000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (83.670958ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-937000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-937000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-937000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --\n** stderr ** \n\tW0530 13:13:01.794650    8009 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig\n\n** /stderr **"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-937000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-937000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (38.959666ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-937000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-937000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-937000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-05-30 13:13:01.834649 -0700 PDT m=+492.175527001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-937000 -n cert-options-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-937000 -n cert-options-937000: exit status 7 (28.69125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-937000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-937000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-937000
--- FAIL: TestCertOptions (9.93s)
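
The SAN and port assertions above never ran against a real certificate because the VM never started. For reference, on a cluster that does boot, the same check can be reproduced by hand with the exact ssh command the test issues, filtered down to the SAN block (the grep filter is illustrative and not part of the test):
	out/minikube-darwin-arm64 -p cert-options-937000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"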

TestCertExpiration (195.16s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-126000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-126000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.764079583s)

-- stdout --
	* [cert-expiration-126000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-126000 in cluster cert-expiration-126000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-126000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-126000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-126000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-126000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-126000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.222541417s)

-- stdout --
	* [cert-expiration-126000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-126000 in cluster cert-expiration-126000
	* Restarting existing qemu2 VM for "cert-expiration-126000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-126000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-126000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-126000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-126000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-126000 in cluster cert-expiration-126000
	* Restarting existing qemu2 VM for "cert-expiration-126000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-126000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-126000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-05-30 13:16:01.884372 -0700 PDT m=+672.221172584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-126000 -n cert-expiration-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-126000 -n cert-expiration-126000: exit status 7 (66.977334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-126000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-126000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-126000
--- FAIL: TestCertExpiration (195.16s)
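
As with the other failures, both the 3m-expiration start and the later 8760h restart die before any certificate is ever issued, so the expired-cert warning the test looks for cannot appear. On a cluster that does start, the expiry this test exercises can be read directly from the node with standard openssl flags (sketch only; the path mirrors the test's own ssh usage):
	out/minikube-darwin-arm64 -p cert-expiration-126000 ssh \
	  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"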

TestDockerFlags (10.16s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-697000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:45: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-697000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.901673125s)

-- stdout --
	* [docker-flags-697000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-697000 in cluster docker-flags-697000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-697000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0530 13:12:41.905849    7943 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:12:41.905975    7943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:12:41.905979    7943 out.go:309] Setting ErrFile to fd 2...
	I0530 13:12:41.905982    7943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:12:41.906047    7943 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:12:41.907081    7943 out.go:303] Setting JSON to false
	I0530 13:12:41.922278    7943 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4332,"bootTime":1685473229,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:12:41.922354    7943 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:12:41.926873    7943 out.go:177] * [docker-flags-697000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:12:41.933732    7943 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:12:41.933761    7943 notify.go:220] Checking for updates...
	I0530 13:12:41.940695    7943 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:12:41.943766    7943 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:12:41.946802    7943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:12:41.949718    7943 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:12:41.952734    7943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:12:41.956071    7943 config.go:182] Loaded profile config "force-systemd-flag-160000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:12:41.956138    7943 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:12:41.956159    7943 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:12:41.959638    7943 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:12:41.966704    7943 start.go:295] selected driver: qemu2
	I0530 13:12:41.966709    7943 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:12:41.966715    7943 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:12:41.968697    7943 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:12:41.982790    7943 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:12:41.985857    7943 start_flags.go:910] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0530 13:12:41.985885    7943 cni.go:84] Creating CNI manager for ""
	I0530 13:12:41.985893    7943 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:12:41.985898    7943 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0530 13:12:41.985903    7943 start_flags.go:319] config:
	{Name:docker-flags-697000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:docker-flags-697000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:12:41.985980    7943 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:12:41.994710    7943 out.go:177] * Starting control plane node docker-flags-697000 in cluster docker-flags-697000
	I0530 13:12:41.998714    7943 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:12:41.998736    7943 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:12:41.998748    7943 cache.go:57] Caching tarball of preloaded images
	I0530 13:12:41.998812    7943 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:12:41.998817    7943 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:12:41.998876    7943 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/docker-flags-697000/config.json ...
	I0530 13:12:41.998888    7943 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/docker-flags-697000/config.json: {Name:mkf742df46f982977f01fe6076dfd2c287d016a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:12:41.999098    7943 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:12:41.999115    7943 start.go:364] acquiring machines lock for docker-flags-697000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:12:41.999147    7943 start.go:368] acquired machines lock for "docker-flags-697000" in 26µs
	I0530 13:12:41.999162    7943 start.go:93] Provisioning new machine with config: &{Name:docker-flags-697000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:docker-flags-697000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:12:41.999186    7943 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:12:42.007767    7943 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0530 13:12:42.025180    7943 start.go:159] libmachine.API.Create for "docker-flags-697000" (driver="qemu2")
	I0530 13:12:42.025208    7943 client.go:168] LocalClient.Create starting
	I0530 13:12:42.025271    7943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:12:42.025294    7943 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:42.025306    7943 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:42.025349    7943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:12:42.025365    7943 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:42.025372    7943 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:42.025714    7943 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:12:42.140853    7943 main.go:141] libmachine: Creating SSH key...
	I0530 13:12:42.335012    7943 main.go:141] libmachine: Creating Disk image...
	I0530 13:12:42.335019    7943 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:12:42.335190    7943 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/disk.qcow2
	I0530 13:12:42.344300    7943 main.go:141] libmachine: STDOUT: 
	I0530 13:12:42.344319    7943 main.go:141] libmachine: STDERR: 
	I0530 13:12:42.344381    7943 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/disk.qcow2 +20000M
	I0530 13:12:42.351631    7943 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:12:42.351643    7943 main.go:141] libmachine: STDERR: 
	I0530 13:12:42.351659    7943 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/disk.qcow2
	I0530 13:12:42.351666    7943 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:12:42.351704    7943 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:72:32:4b:2a:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/disk.qcow2
	I0530 13:12:42.353262    7943 main.go:141] libmachine: STDOUT: 
	I0530 13:12:42.353274    7943 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:12:42.353299    7943 client.go:171] LocalClient.Create took 328.094ms
	I0530 13:12:44.355402    7943 start.go:128] duration metric: createHost completed in 2.356256125s
	I0530 13:12:44.355459    7943 start.go:83] releasing machines lock for "docker-flags-697000", held for 2.356361459s
	W0530 13:12:44.355515    7943 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:12:44.369900    7943 out.go:177] * Deleting "docker-flags-697000" in qemu2 ...
	W0530 13:12:44.386974    7943 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:12:44.387004    7943 start.go:702] Will try again in 5 seconds ...
	I0530 13:12:49.389050    7943 start.go:364] acquiring machines lock for docker-flags-697000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:12:49.389523    7943 start.go:368] acquired machines lock for "docker-flags-697000" in 394.583µs
	I0530 13:12:49.389639    7943 start.go:93] Provisioning new machine with config: &{Name:docker-flags-697000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:docker-flags-697000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:12:49.389907    7943 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:12:49.399491    7943 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0530 13:12:49.447839    7943 start.go:159] libmachine.API.Create for "docker-flags-697000" (driver="qemu2")
	I0530 13:12:49.447897    7943 client.go:168] LocalClient.Create starting
	I0530 13:12:49.448001    7943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:12:49.448037    7943 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:49.448055    7943 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:49.448124    7943 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:12:49.448152    7943 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:49.448169    7943 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:49.448852    7943 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:12:49.577318    7943 main.go:141] libmachine: Creating SSH key...
	I0530 13:12:49.720229    7943 main.go:141] libmachine: Creating Disk image...
	I0530 13:12:49.720238    7943 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:12:49.720395    7943 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/disk.qcow2
	I0530 13:12:49.728875    7943 main.go:141] libmachine: STDOUT: 
	I0530 13:12:49.728885    7943 main.go:141] libmachine: STDERR: 
	I0530 13:12:49.728933    7943 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/disk.qcow2 +20000M
	I0530 13:12:49.736001    7943 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:12:49.736011    7943 main.go:141] libmachine: STDERR: 
	I0530 13:12:49.736022    7943 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/disk.qcow2
	I0530 13:12:49.736029    7943 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:12:49.736072    7943 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:c5:a0:e5:69:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/docker-flags-697000/disk.qcow2
	I0530 13:12:49.737534    7943 main.go:141] libmachine: STDOUT: 
	I0530 13:12:49.737544    7943 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:12:49.737556    7943 client.go:171] LocalClient.Create took 289.660167ms
	I0530 13:12:51.739665    7943 start.go:128] duration metric: createHost completed in 2.349789708s
	I0530 13:12:51.739774    7943 start.go:83] releasing machines lock for "docker-flags-697000", held for 2.350238375s
	W0530 13:12:51.740347    7943 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-697000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-697000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:12:51.749954    7943 out.go:177] 
	W0530 13:12:51.754144    7943 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:12:51.754173    7943 out.go:239] * 
	* 
	W0530 13:12:51.756556    7943 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:12:51.767013    7943 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:47: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-697000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:50: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-697000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-697000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (79.775ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-697000"

                                                
                                                
-- /stdout --
docker_test.go:52: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-697000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:57: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-697000\"\n"*.
docker_test.go:57: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-697000\"\n"*.
docker_test.go:61: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-697000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-697000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (43.3175ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-697000"

                                                
                                                
-- /stdout --
docker_test.go:63: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-697000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:67: expected "out/minikube-darwin-arm64 -p docker-flags-697000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-697000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-05-30 13:12:51.905939 -0700 PDT m=+482.246572209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-697000 -n docker-flags-697000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-697000 -n docker-flags-697000: exit status 7 (28.023708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-697000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-697000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-697000
--- FAIL: TestDockerFlags (10.16s)
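The assertions at docker_test.go:57 and docker_test.go:67 never get to inspect real systemd output in this run; the only text they receive is the "The control plane node must be running for this command" hint, so the FOO=BAR, BAZ=BAT and --debug checks fail trivially. On a healthy node they reduce to substring matches against the two `systemctl show docker` properties queried above. A minimal, self-contained sketch of that logic (the sample strings are hypothetical healthy-node output, not captured from this run, and this is not the test's actual code):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical output of `systemctl show docker --property=Environment --no-pager`
	// and `systemctl show docker --property=ExecStart --no-pager` on a running node.
	environment := "Environment=FOO=BAR BAZ=BAT"
	execStart := "ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --debug --icc=true }"

	// docker_test.go:57 expects each --docker-env key/value to be included in the output.
	for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
		fmt.Printf("--docker-env %s applied: %v\n", kv, strings.Contains(environment, kv))
	}
	// docker_test.go:67 expects the --docker-opt debug flag to show up in ExecStart.
	fmt.Printf("--docker-opt debug applied: %v\n", strings.Contains(execStart, "--debug"))
}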

                                                
                                    
TestForceSystemdFlag (12.11s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-160000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-160000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.891514209s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-160000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-160000 in cluster force-systemd-flag-160000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-160000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:12:34.803866    7922 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:12:34.803990    7922 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:12:34.803993    7922 out.go:309] Setting ErrFile to fd 2...
	I0530 13:12:34.803996    7922 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:12:34.804077    7922 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:12:34.805132    7922 out.go:303] Setting JSON to false
	I0530 13:12:34.820095    7922 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4325,"bootTime":1685473229,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:12:34.820158    7922 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:12:34.826133    7922 out.go:177] * [force-systemd-flag-160000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:12:34.833131    7922 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:12:34.833178    7922 notify.go:220] Checking for updates...
	I0530 13:12:34.841007    7922 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:12:34.844343    7922 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:12:34.847136    7922 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:12:34.849990    7922 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:12:34.853032    7922 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:12:34.856393    7922 config.go:182] Loaded profile config "force-systemd-env-370000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:12:34.856461    7922 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:12:34.856481    7922 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:12:34.861029    7922 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:12:34.868051    7922 start.go:295] selected driver: qemu2
	I0530 13:12:34.868056    7922 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:12:34.868061    7922 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:12:34.869988    7922 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:12:34.873029    7922 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:12:34.876119    7922 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0530 13:12:34.876132    7922 cni.go:84] Creating CNI manager for ""
	I0530 13:12:34.876139    7922 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:12:34.876143    7922 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0530 13:12:34.876149    7922 start_flags.go:319] config:
	{Name:force-systemd-flag-160000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-160000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:12:34.876221    7922 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:12:34.883975    7922 out.go:177] * Starting control plane node force-systemd-flag-160000 in cluster force-systemd-flag-160000
	I0530 13:12:34.887071    7922 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:12:34.887115    7922 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:12:34.887133    7922 cache.go:57] Caching tarball of preloaded images
	I0530 13:12:34.887211    7922 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:12:34.887223    7922 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:12:34.887280    7922 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/force-systemd-flag-160000/config.json ...
	I0530 13:12:34.887306    7922 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/force-systemd-flag-160000/config.json: {Name:mk2c970f0abb717c4b9fb5872c0e0ec8a6b5e475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:12:34.887511    7922 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:12:34.887528    7922 start.go:364] acquiring machines lock for force-systemd-flag-160000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:12:34.887559    7922 start.go:368] acquired machines lock for "force-systemd-flag-160000" in 26.334µs
	I0530 13:12:34.887573    7922 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-160000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:12:34.887607    7922 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:12:34.894976    7922 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0530 13:12:34.912199    7922 start.go:159] libmachine.API.Create for "force-systemd-flag-160000" (driver="qemu2")
	I0530 13:12:34.912236    7922 client.go:168] LocalClient.Create starting
	I0530 13:12:34.912314    7922 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:12:34.912339    7922 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:34.912350    7922 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:34.912405    7922 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:12:34.912422    7922 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:34.912434    7922 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:34.912828    7922 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:12:35.030847    7922 main.go:141] libmachine: Creating SSH key...
	I0530 13:12:35.094968    7922 main.go:141] libmachine: Creating Disk image...
	I0530 13:12:35.094974    7922 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:12:35.095118    7922 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/disk.qcow2
	I0530 13:12:35.103701    7922 main.go:141] libmachine: STDOUT: 
	I0530 13:12:35.103721    7922 main.go:141] libmachine: STDERR: 
	I0530 13:12:35.103795    7922 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/disk.qcow2 +20000M
	I0530 13:12:35.111047    7922 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:12:35.111060    7922 main.go:141] libmachine: STDERR: 
	I0530 13:12:35.111079    7922 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/disk.qcow2
	I0530 13:12:35.111087    7922 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:12:35.111118    7922 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:1f:f2:e3:81:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/disk.qcow2
	I0530 13:12:35.112651    7922 main.go:141] libmachine: STDOUT: 
	I0530 13:12:35.112665    7922 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:12:35.112688    7922 client.go:171] LocalClient.Create took 200.451959ms
	I0530 13:12:37.114868    7922 start.go:128] duration metric: createHost completed in 2.227274709s
	I0530 13:12:37.114941    7922 start.go:83] releasing machines lock for "force-systemd-flag-160000", held for 2.227427542s
	W0530 13:12:37.114992    7922 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:12:37.122420    7922 out.go:177] * Deleting "force-systemd-flag-160000" in qemu2 ...
	W0530 13:12:37.146493    7922 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:12:37.146518    7922 start.go:702] Will try again in 5 seconds ...
	I0530 13:12:42.148487    7922 start.go:364] acquiring machines lock for force-systemd-flag-160000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:12:44.355626    7922 start.go:368] acquired machines lock for "force-systemd-flag-160000" in 2.2071485s
	I0530 13:12:44.355770    7922 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-160000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:12:44.356112    7922 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:12:44.361876    7922 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0530 13:12:44.409734    7922 start.go:159] libmachine.API.Create for "force-systemd-flag-160000" (driver="qemu2")
	I0530 13:12:44.409790    7922 client.go:168] LocalClient.Create starting
	I0530 13:12:44.409957    7922 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:12:44.410012    7922 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:44.410038    7922 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:44.410118    7922 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:12:44.410149    7922 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:44.410163    7922 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:44.410699    7922 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:12:44.537980    7922 main.go:141] libmachine: Creating SSH key...
	I0530 13:12:44.605347    7922 main.go:141] libmachine: Creating Disk image...
	I0530 13:12:44.605353    7922 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:12:44.605496    7922 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/disk.qcow2
	I0530 13:12:44.614220    7922 main.go:141] libmachine: STDOUT: 
	I0530 13:12:44.614236    7922 main.go:141] libmachine: STDERR: 
	I0530 13:12:44.614287    7922 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/disk.qcow2 +20000M
	I0530 13:12:44.621419    7922 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:12:44.621434    7922 main.go:141] libmachine: STDERR: 
	I0530 13:12:44.621449    7922 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/disk.qcow2
	I0530 13:12:44.621455    7922 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:12:44.621493    7922 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:16:8e:f0:b8:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-flag-160000/disk.qcow2
	I0530 13:12:44.623017    7922 main.go:141] libmachine: STDOUT: 
	I0530 13:12:44.623029    7922 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:12:44.623041    7922 client.go:171] LocalClient.Create took 213.249042ms
	I0530 13:12:46.625189    7922 start.go:128] duration metric: createHost completed in 2.26909025s
	I0530 13:12:46.625278    7922 start.go:83] releasing machines lock for "force-systemd-flag-160000", held for 2.26967225s
	W0530 13:12:46.626019    7922 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:12:46.631799    7922 out.go:177] 
	W0530 13:12:46.644907    7922 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:12:46.644951    7922 out.go:239] * 
	* 
	W0530 13:12:46.647523    7922 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:12:46.656694    7922 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:87: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-160000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:104: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-160000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-160000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (79.011583ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-160000"

                                                
                                                
-- /stdout --
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-160000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:100: *** TestForceSystemdFlag FAILED at 2023-05-30 13:12:46.752335 -0700 PDT m=+477.092840959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-160000 -n force-systemd-flag-160000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-160000 -n force-systemd-flag-160000: exit status 7 (34.205708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-160000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-160000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-160000
--- FAIL: TestForceSystemdFlag (12.11s)
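Both failures above (and the TestForceSystemdEnv run that follows) stop at the same step: /opt/socket_vmnet/bin/socket_vmnet_client reports `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which means nothing is listening on that unix socket when QEMU is launched, so LocalClient.Create aborts before the VM ever boots. One quick way to confirm that from Go is to dial the socket directly; the sketch below is illustrative only (the path comes from the command lines above, the rest is an assumption, not part of minikube or socket_vmnet):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from the socket_vmnet_client invocations in the logs above.
	const sock = "/var/run/socket_vmnet"

	// A "connection refused" here matches the error reported during VM start: no
	// socket_vmnet daemon is currently accepting connections on this path.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", sock)
}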

                                                
                                    
TestForceSystemdEnv (9.94s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-370000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-370000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.716698584s)

                                                
                                                
-- stdout --
	* [force-systemd-env-370000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-370000 in cluster force-systemd-env-370000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-370000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:12:31.974125    7904 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:12:31.974280    7904 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:12:31.974283    7904 out.go:309] Setting ErrFile to fd 2...
	I0530 13:12:31.974289    7904 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:12:31.974369    7904 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:12:31.975661    7904 out.go:303] Setting JSON to false
	I0530 13:12:31.993236    7904 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4322,"bootTime":1685473229,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:12:31.993304    7904 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:12:31.998527    7904 out.go:177] * [force-systemd-env-370000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:12:32.005500    7904 notify.go:220] Checking for updates...
	I0530 13:12:32.009440    7904 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:12:32.012501    7904 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:12:32.016908    7904 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:12:32.019540    7904 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:12:32.022448    7904 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:12:32.025471    7904 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0530 13:12:32.028762    7904 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:12:32.028786    7904 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:12:32.033323    7904 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:12:32.040462    7904 start.go:295] selected driver: qemu2
	I0530 13:12:32.040471    7904 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:12:32.040478    7904 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:12:32.042570    7904 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:12:32.046425    7904 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:12:32.049568    7904 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0530 13:12:32.049584    7904 cni.go:84] Creating CNI manager for ""
	I0530 13:12:32.049592    7904 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:12:32.049600    7904 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0530 13:12:32.049604    7904 start_flags.go:319] config:
	{Name:force-systemd-env-370000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-env-370000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:12:32.049695    7904 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:12:32.057413    7904 out.go:177] * Starting control plane node force-systemd-env-370000 in cluster force-systemd-env-370000
	I0530 13:12:32.061437    7904 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:12:32.061481    7904 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:12:32.061495    7904 cache.go:57] Caching tarball of preloaded images
	I0530 13:12:32.061571    7904 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:12:32.061577    7904 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:12:32.061634    7904 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/force-systemd-env-370000/config.json ...
	I0530 13:12:32.061646    7904 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/force-systemd-env-370000/config.json: {Name:mka5e4e6afc01074a8e3f6d5a36dfadd18e38ab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:12:32.061851    7904 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:12:32.061868    7904 start.go:364] acquiring machines lock for force-systemd-env-370000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:12:32.061893    7904 start.go:368] acquired machines lock for "force-systemd-env-370000" in 21.542µs
	I0530 13:12:32.061907    7904 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-370000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-env-370000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:12:32.061934    7904 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:12:32.071439    7904 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0530 13:12:32.087421    7904 start.go:159] libmachine.API.Create for "force-systemd-env-370000" (driver="qemu2")
	I0530 13:12:32.087447    7904 client.go:168] LocalClient.Create starting
	I0530 13:12:32.087530    7904 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:12:32.087558    7904 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:32.087574    7904 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:32.087624    7904 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:12:32.087638    7904 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:32.087648    7904 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:32.088040    7904 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:12:32.212116    7904 main.go:141] libmachine: Creating SSH key...
	I0530 13:12:32.251850    7904 main.go:141] libmachine: Creating Disk image...
	I0530 13:12:32.251857    7904 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:12:32.252028    7904 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/disk.qcow2
	I0530 13:12:32.260974    7904 main.go:141] libmachine: STDOUT: 
	I0530 13:12:32.260991    7904 main.go:141] libmachine: STDERR: 
	I0530 13:12:32.261059    7904 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/disk.qcow2 +20000M
	I0530 13:12:32.282454    7904 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:12:32.282484    7904 main.go:141] libmachine: STDERR: 
	I0530 13:12:32.282505    7904 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/disk.qcow2
	I0530 13:12:32.282510    7904 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:12:32.282546    7904 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:76:e9:b0:17:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/disk.qcow2
	I0530 13:12:32.284322    7904 main.go:141] libmachine: STDOUT: 
	I0530 13:12:32.284335    7904 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:12:32.284354    7904 client.go:171] LocalClient.Create took 196.90825ms
	I0530 13:12:34.286516    7904 start.go:128] duration metric: createHost completed in 2.224613167s
	I0530 13:12:34.286574    7904 start.go:83] releasing machines lock for "force-systemd-env-370000", held for 2.224725875s
	W0530 13:12:34.286630    7904 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:12:34.297020    7904 out.go:177] * Deleting "force-systemd-env-370000" in qemu2 ...
	W0530 13:12:34.317228    7904 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:12:34.317309    7904 start.go:702] Will try again in 5 seconds ...
	I0530 13:12:39.319444    7904 start.go:364] acquiring machines lock for force-systemd-env-370000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:12:39.319933    7904 start.go:368] acquired machines lock for "force-systemd-env-370000" in 373.917µs
	I0530 13:12:39.320452    7904 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-370000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-env-370000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:12:39.320742    7904 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:12:39.331641    7904 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0530 13:12:39.378738    7904 start.go:159] libmachine.API.Create for "force-systemd-env-370000" (driver="qemu2")
	I0530 13:12:39.378776    7904 client.go:168] LocalClient.Create starting
	I0530 13:12:39.378947    7904 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:12:39.379006    7904 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:39.379023    7904 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:39.379109    7904 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:12:39.379140    7904 main.go:141] libmachine: Decoding PEM data...
	I0530 13:12:39.379165    7904 main.go:141] libmachine: Parsing certificate...
	I0530 13:12:39.379689    7904 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:12:39.507019    7904 main.go:141] libmachine: Creating SSH key...
	I0530 13:12:39.598867    7904 main.go:141] libmachine: Creating Disk image...
	I0530 13:12:39.598873    7904 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:12:39.599034    7904 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/disk.qcow2
	I0530 13:12:39.607509    7904 main.go:141] libmachine: STDOUT: 
	I0530 13:12:39.607526    7904 main.go:141] libmachine: STDERR: 
	I0530 13:12:39.607595    7904 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/disk.qcow2 +20000M
	I0530 13:12:39.614915    7904 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:12:39.614929    7904 main.go:141] libmachine: STDERR: 
	I0530 13:12:39.614949    7904 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/disk.qcow2
	I0530 13:12:39.614956    7904 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:12:39.614992    7904 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:e4:e7:82:ff:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/force-systemd-env-370000/disk.qcow2
	I0530 13:12:39.616538    7904 main.go:141] libmachine: STDOUT: 
	I0530 13:12:39.616552    7904 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:12:39.616563    7904 client.go:171] LocalClient.Create took 237.786917ms
	I0530 13:12:41.618695    7904 start.go:128] duration metric: createHost completed in 2.297985875s
	I0530 13:12:41.618750    7904 start.go:83] releasing machines lock for "force-systemd-env-370000", held for 2.298850292s
	W0530 13:12:41.619355    7904 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-370000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-370000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:12:41.628927    7904 out.go:177] 
	W0530 13:12:41.633907    7904 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:12:41.633939    7904 out.go:239] * 
	* 
	W0530 13:12:41.636424    7904 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:12:41.645864    7904 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:151: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-370000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:104: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-370000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-370000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (77.631958ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-370000"

                                                
                                                
-- /stdout --
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-370000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:160: *** TestForceSystemdEnv FAILED at 2023-05-30 13:12:41.739922 -0700 PDT m=+472.080304667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-370000 -n force-systemd-env-370000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-370000 -n force-systemd-env-370000: exit status 7 (32.735667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-370000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-370000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-370000
--- FAIL: TestForceSystemdEnv (9.94s)
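Every qemu2 start failure in this run reduces to the same root cause: the socket_vmnet client cannot reach the daemon's Unix socket at /var/run/socket_vmnet ("Connection refused"). A minimal pre-flight check on the build host, assuming socket_vmnet lives under /opt/socket_vmnet as the logged client path suggests (generic shell, not part of the test suite):

    # Does the Unix socket the client dials exist at all?
    ls -l /var/run/socket_vmnet
    # Is a socket_vmnet daemon process running?
    pgrep -fl socket_vmnet
    # Invoking the logged client binary by hand reproduces the error when the
    # daemon is down; "true" is only a placeholder for the wrapped command.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the socket is missing or the daemon is not running, restarting the socket_vmnet service on the host is the likely fix; the exact restart command depends on how it was installed.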

                                                
                                    
TestErrorSpam/setup (9.66s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-257000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-257000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 --driver=qemu2 : exit status 80 (9.66207275s)

                                                
                                                
-- stdout --
	* [nospam-257000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node nospam-257000 in cluster nospam-257000
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-257000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-257000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-257000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-257000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-257000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
- MINIKUBE_LOCATION=16597
- KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting control plane node nospam-257000 in cluster nospam-257000
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "nospam-257000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                

                                                
                                                

                                                
                                                
error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-257000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.66s)

                                                
                                    
TestFunctional/serial/StartWithProxy (9.8s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-602000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2229: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-602000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.731532s)

                                                
                                                
-- stdout --
	* [functional-602000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node functional-602000 in cluster functional-602000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-602000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50618 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50618 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50618 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-602000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2231: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-602000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2236: start stdout=* [functional-602000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
- MINIKUBE_LOCATION=16597
- KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting control plane node functional-602000 in cluster functional-602000
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "functional-602000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                

                                                
                                                

                                                
                                                
, want: *Found network options:*
functional_test.go:2241: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:50618 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:50618 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:50618 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-602000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (70.22575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.80s)
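StartWithProxy differs from a plain start only in that the harness exports HTTP_PROXY before invoking minikube (the "Local proxy ignored: not passing HTTP_PROXY=localhost:50618" warnings show the variable was set) and then expects a "You appear to be using a proxy" notice. A rough manual replay, assuming something is still listening on the port this run's harness happened to pick:

    # Hypothetical reproduction; 50618 is simply the proxy port used by this run.
    HTTP_PROXY=localhost:50618 out/minikube-darwin-arm64 start -p functional-602000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2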

                                                
                                    
TestFunctional/serial/SoftStart (5.25s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-602000 --alsologtostderr -v=8
functional_test.go:654: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-602000 --alsologtostderr -v=8: exit status 80 (5.176947459s)

                                                
                                                
-- stdout --
	* [functional-602000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node functional-602000 in cluster functional-602000
	* Restarting existing qemu2 VM for "functional-602000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-602000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:05:57.430107    6757 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:05:57.430216    6757 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:05:57.430219    6757 out.go:309] Setting ErrFile to fd 2...
	I0530 13:05:57.430222    6757 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:05:57.430291    6757 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:05:57.431271    6757 out.go:303] Setting JSON to false
	I0530 13:05:57.446456    6757 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3928,"bootTime":1685473229,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:05:57.446518    6757 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:05:57.450886    6757 out.go:177] * [functional-602000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:05:57.457785    6757 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:05:57.457837    6757 notify.go:220] Checking for updates...
	I0530 13:05:57.464749    6757 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:05:57.467796    6757 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:05:57.470748    6757 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:05:57.473776    6757 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:05:57.476678    6757 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:05:57.480025    6757 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:05:57.480048    6757 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:05:57.484706    6757 out.go:177] * Using the qemu2 driver based on existing profile
	I0530 13:05:57.491746    6757 start.go:295] selected driver: qemu2
	I0530 13:05:57.491752    6757 start.go:870] validating driver "qemu2" against &{Name:functional-602000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:functional-602000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:05:57.491820    6757 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:05:57.493710    6757 cni.go:84] Creating CNI manager for ""
	I0530 13:05:57.493725    6757 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:05:57.493731    6757 start_flags.go:319] config:
	{Name:functional-602000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-602000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:05:57.493807    6757 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:05:57.499688    6757 out.go:177] * Starting control plane node functional-602000 in cluster functional-602000
	I0530 13:05:57.503746    6757 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:05:57.503768    6757 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:05:57.503779    6757 cache.go:57] Caching tarball of preloaded images
	I0530 13:05:57.503844    6757 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:05:57.503849    6757 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:05:57.503900    6757 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/functional-602000/config.json ...
	I0530 13:05:57.504193    6757 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:05:57.504205    6757 start.go:364] acquiring machines lock for functional-602000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:05:57.504258    6757 start.go:368] acquired machines lock for "functional-602000" in 48.375µs
	I0530 13:05:57.504267    6757 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:05:57.504271    6757 fix.go:55] fixHost starting: 
	I0530 13:05:57.504379    6757 fix.go:103] recreateIfNeeded on functional-602000: state=Stopped err=<nil>
	W0530 13:05:57.504387    6757 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:05:57.512698    6757 out.go:177] * Restarting existing qemu2 VM for "functional-602000" ...
	I0530 13:05:57.515730    6757 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d7:cd:70:27:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/disk.qcow2
	I0530 13:05:57.517503    6757 main.go:141] libmachine: STDOUT: 
	I0530 13:05:57.517520    6757 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:05:57.517547    6757 fix.go:57] fixHost completed within 13.27475ms
	I0530 13:05:57.517552    6757 start.go:83] releasing machines lock for "functional-602000", held for 13.290125ms
	W0530 13:05:57.517559    6757 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:05:57.517612    6757 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:05:57.517616    6757 start.go:702] Will try again in 5 seconds ...
	I0530 13:06:02.518071    6757 start.go:364] acquiring machines lock for functional-602000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:06:02.518445    6757 start.go:368] acquired machines lock for "functional-602000" in 292.667µs
	I0530 13:06:02.518581    6757 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:06:02.518601    6757 fix.go:55] fixHost starting: 
	I0530 13:06:02.519291    6757 fix.go:103] recreateIfNeeded on functional-602000: state=Stopped err=<nil>
	W0530 13:06:02.519317    6757 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:06:02.528128    6757 out.go:177] * Restarting existing qemu2 VM for "functional-602000" ...
	I0530 13:06:02.532241    6757 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d7:cd:70:27:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/disk.qcow2
	I0530 13:06:02.541488    6757 main.go:141] libmachine: STDOUT: 
	I0530 13:06:02.541552    6757 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:06:02.541648    6757 fix.go:57] fixHost completed within 23.049ms
	I0530 13:06:02.541669    6757 start.go:83] releasing machines lock for "functional-602000", held for 23.202667ms
	W0530 13:06:02.541988    6757 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-602000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-602000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:06:02.549948    6757 out.go:177] 
	W0530 13:06:02.554262    6757 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:06:02.554289    6757 out.go:239] * 
	* 
	W0530 13:06:02.557276    6757 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:06:02.564153    6757 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:656: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-602000 --alsologtostderr -v=8": exit status 80
functional_test.go:658: soft start took 5.178553958s for "functional-602000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (68.329167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)
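SoftStart never gets past "Restarting existing qemu2 VM" for the same socket_vmnet reason. The machine state it tries to recover can be inspected directly from the artifacts named in the log (paths as logged for this Jenkins run; adjust MINIKUBE_HOME elsewhere):

    # Pidfile written via the -pidfile flag in the logged QEMU command line.
    cat /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/qemu.pid
    # Any live QEMU guests on the host?
    pgrep -fl qemu-system-aarch64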

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
functional_test.go:676: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (28.439ms)

                                                
                                                
** stderr ** 
	W0530 13:06:02.674801    6766 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: current-context is not set

                                                
                                                
** /stderr **
functional_test.go:678: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:682: expected current-context = "functional-602000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (29.265292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
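KubeContext (and KubectlGetPods below) fail as a direct consequence of the start failure: no cluster was ever created, so the per-run kubeconfig was never written and kubectl has no current context. The same check can be run by hand against the kubeconfig path from the environment dump above:

    # The file does not exist here, which is exactly the "Config not found" warning.
    KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig kubectl config current-context
    KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig kubectl --context functional-602000 get po -A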

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-602000 get po -A
functional_test.go:691: (dbg) Non-zero exit: kubectl --context functional-602000 get po -A: exit status 1 (25.499958ms)

                                                
                                                
** stderr ** 
	W0530 13:06:02.729810    6769 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	Error in configuration: context was not found for specified context: functional-602000

                                                
                                                
** /stderr **
functional_test.go:693: failed to get kubectl pods: args "kubectl --context functional-602000 get po -A" : exit status 1
functional_test.go:697: expected stderr to be empty but got *"W0530 13:06:02.729810    6769 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig\nError in configuration: context was not found for specified context: functional-602000\n"*: args "kubectl --context functional-602000 get po -A"
functional_test.go:700: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-602000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (29.238542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh sudo crictl images
functional_test.go:1119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh sudo crictl images: exit status 89 (42.908792ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
functional_test.go:1121: failed to get images by "out/minikube-darwin-arm64 -p functional-602000 ssh sudo crictl images" ssh exit status 89
functional_test.go:1125: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)
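On a healthy cluster this test just lists images inside the node over SSH and checks that the pause:3.3 layer added by the cache step is present. The equivalent manual check, using the sha prefix from the test expectation above:

    out/minikube-darwin-arm64 -p functional-602000 ssh sudo crictl images | grep 3d18732f8686c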

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1142: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 89 (40.038958ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
functional_test.go:1145: failed to manually delete image "out/minikube-darwin-arm64 -p functional-602000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 89
functional_test.go:1148: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 89 (38.861125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 cache reload
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1158: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 89 (37.98725ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
functional_test.go:1160: expected "out/minikube-darwin-arm64 -p functional-602000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 89
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)
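The cache_reload flow is: remove the image inside the node, confirm crictl no longer sees it, run cache reload, then confirm crictl sees it again. All four commands come straight from the run above and could be replayed once a cluster actually starts:

    out/minikube-darwin-arm64 -p functional-602000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-602000 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # expected to fail here
    out/minikube-darwin-arm64 -p functional-602000 cache reload
    out/minikube-darwin-arm64 -p functional-602000 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # expected to succeed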

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 kubectl -- --context functional-602000 get pods
functional_test.go:711: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 kubectl -- --context functional-602000 get pods: exit status 1 (455.515ms)

                                                
                                                
** stderr ** 
	W0530 13:06:08.375201    6854 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-602000
	* no server found for cluster "functional-602000"

                                                
                                                
** /stderr **
functional_test.go:714: failed to get pods. args "out/minikube-darwin-arm64 -p functional-602000 kubectl -- --context functional-602000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (31.628459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.49s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.59s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-602000 get pods
functional_test.go:736: (dbg) Non-zero exit: out/kubectl --context functional-602000 get pods: exit status 1 (563.497208ms)

                                                
                                                
** stderr ** 
	W0530 13:06:08.971164    6859 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-602000
	* no server found for cluster "functional-602000"

                                                
                                                
** /stderr **
functional_test.go:739: failed to run kubectl directly. args "out/kubectl --context functional-602000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (29.00425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.59s)

                                                
                                    
TestFunctional/serial/ExtraConfig (5.25s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-602000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-602000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.173586458s)

-- stdout --
	* [functional-602000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node functional-602000 in cluster functional-602000
	* Restarting existing qemu2 VM for "functional-602000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-602000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-602000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:754: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-602000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:756: restart took 5.17404825s for "functional-602000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (71.837708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.25s)
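
Note: every restart in this run fails at the same point: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal sketch for checking the daemon on the host, assuming socket_vmnet is installed at the paths shown in the log (how the daemon is supervised, launchd or otherwise, is host-specific):

	ls -l /var/run/socket_vmnet      # the socket file must exist and be served by a running daemon
	pgrep -fl socket_vmnet           # confirm a socket_vmnet process is actually running
	# once the daemon is back, the advice printed in the log above applies:
	#   minikube delete -p functional-602000 && minikube start -p functional-602000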

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-602000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:805: (dbg) Non-zero exit: kubectl --context functional-602000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.946959ms)

** stderr ** 
	W0530 13:06:14.277118    6871 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "functional-602000" does not exist

** /stderr **
functional_test.go:807: failed to get components. args "kubectl --context functional-602000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (29.2035ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
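
Note: each post-mortem in this report runs the same host check and treats exit status 7 as possibly benign; in this run it consistently accompanies the "Stopped" state, which is why log retrieval is skipped. The check can be reproduced by hand with the command helpers_test.go already uses; the JSON variant is an assumption based on minikube's status flags, not something the suite runs:

	out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000   # prints "Stopped", exit status 7
	out/minikube-darwin-arm64 status -p functional-602000 --output=json                             # same state as structured output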

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 logs
functional_test.go:1231: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 logs: exit status 89 (74.3285ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-063000 | jenkins | v1.30.1 | 30 May 23 13:04 PDT |                     |
	|         | -p download-only-063000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                  | download-only-063000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
	|         | -p download-only-063000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
	| delete  | -p download-only-063000                                                  | download-only-063000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
	| delete  | -p download-only-063000                                                  | download-only-063000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
	| start   | --download-only -p                                                       | binary-mirror-577000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
	|         | binary-mirror-577000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:50607                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-577000                                                  | binary-mirror-577000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
	| start   | -p addons-827000                                                         | addons-827000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
	|         | --wait=true --memory=4000                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	|         | --addons=ingress                                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-827000                                                         | addons-827000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
	| start   | -p nospam-257000 -n=1 --memory=2250 --wait=false                         | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-257000                                                         | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
	| start   | -p functional-602000                                                     | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-602000                                                     | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-602000 cache add                                              | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-602000 cache add                                              | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-602000 cache add                                              | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-602000 cache add                                              | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
	|         | minikube-local-cache-test:functional-602000                              |                      |         |         |                     |                     |
	| cache   | functional-602000 cache delete                                           | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
	|         | minikube-local-cache-test:functional-602000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
	| ssh     | functional-602000 ssh sudo                                               | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-602000                                                        | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-602000 ssh                                                    | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-602000 cache reload                                           | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
	| ssh     | functional-602000 ssh                                                    | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-602000 kubectl --                                             | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
	|         | --context functional-602000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-602000                                                     | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/30 13:06:09
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0530 13:06:09.028406    6862 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:06:09.028501    6862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:06:09.028503    6862 out.go:309] Setting ErrFile to fd 2...
	I0530 13:06:09.028505    6862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:06:09.028570    6862 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:06:09.029504    6862 out.go:303] Setting JSON to false
	I0530 13:06:09.044928    6862 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3940,"bootTime":1685473229,"procs":423,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:06:09.044986    6862 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:06:09.050079    6862 out.go:177] * [functional-602000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:06:09.057129    6862 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:06:09.057164    6862 notify.go:220] Checking for updates...
	I0530 13:06:09.064023    6862 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:06:09.067053    6862 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:06:09.069967    6862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:06:09.073045    6862 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:06:09.076046    6862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:06:09.079257    6862 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:06:09.079279    6862 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:06:09.083974    6862 out.go:177] * Using the qemu2 driver based on existing profile
	I0530 13:06:09.090968    6862 start.go:295] selected driver: qemu2
	I0530 13:06:09.090973    6862 start.go:870] validating driver "qemu2" against &{Name:functional-602000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:functional-602000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:06:09.091037    6862 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:06:09.092856    6862 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:06:09.092871    6862 cni.go:84] Creating CNI manager for ""
	I0530 13:06:09.092877    6862 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:06:09.092882    6862 start_flags.go:319] config:
	{Name:functional-602000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-602000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:06:09.092953    6862 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:06:09.099897    6862 out.go:177] * Starting control plane node functional-602000 in cluster functional-602000
	I0530 13:06:09.103992    6862 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:06:09.104014    6862 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:06:09.104024    6862 cache.go:57] Caching tarball of preloaded images
	I0530 13:06:09.104093    6862 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:06:09.104096    6862 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:06:09.104149    6862 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/functional-602000/config.json ...
	I0530 13:06:09.104524    6862 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:06:09.104539    6862 start.go:364] acquiring machines lock for functional-602000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:06:09.104569    6862 start.go:368] acquired machines lock for "functional-602000" in 26.292µs
	I0530 13:06:09.104577    6862 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:06:09.104579    6862 fix.go:55] fixHost starting: 
	I0530 13:06:09.104694    6862 fix.go:103] recreateIfNeeded on functional-602000: state=Stopped err=<nil>
	W0530 13:06:09.104698    6862 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:06:09.111938    6862 out.go:177] * Restarting existing qemu2 VM for "functional-602000" ...
	I0530 13:06:09.116065    6862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d7:cd:70:27:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/disk.qcow2
	I0530 13:06:09.117855    6862 main.go:141] libmachine: STDOUT: 
	I0530 13:06:09.117872    6862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:06:09.117900    6862 fix.go:57] fixHost completed within 13.3205ms
	I0530 13:06:09.117904    6862 start.go:83] releasing machines lock for "functional-602000", held for 13.332708ms
	W0530 13:06:09.117910    6862 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:06:09.117962    6862 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:06:09.117967    6862 start.go:702] Will try again in 5 seconds ...
	I0530 13:06:14.120045    6862 start.go:364] acquiring machines lock for functional-602000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:06:14.120631    6862 start.go:368] acquired machines lock for "functional-602000" in 414.667µs
	I0530 13:06:14.120841    6862 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:06:14.120854    6862 fix.go:55] fixHost starting: 
	I0530 13:06:14.121628    6862 fix.go:103] recreateIfNeeded on functional-602000: state=Stopped err=<nil>
	W0530 13:06:14.121647    6862 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:06:14.125480    6862 out.go:177] * Restarting existing qemu2 VM for "functional-602000" ...
	I0530 13:06:14.133405    6862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d7:cd:70:27:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/disk.qcow2
	I0530 13:06:14.142811    6862 main.go:141] libmachine: STDOUT: 
	I0530 13:06:14.142857    6862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:06:14.142947    6862 fix.go:57] fixHost completed within 22.094208ms
	I0530 13:06:14.142960    6862 start.go:83] releasing machines lock for "functional-602000", held for 22.281875ms
	W0530 13:06:14.143376    6862 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-602000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:06:14.149209    6862 out.go:177] 
	W0530 13:06:14.153391    6862 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:06:14.153455    6862 out.go:239] * 
	W0530 13:06:14.156120    6862 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:06:14.163301    6862 out.go:177] 
	
	* 
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test.go:1233: out/minikube-darwin-arm64 -p functional-602000 logs failed: exit status 89
functional_test.go:1223: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-063000 | jenkins | v1.30.1 | 30 May 23 13:04 PDT |                     |
|         | -p download-only-063000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.16.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | -o=json --download-only                                                  | download-only-063000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | -p download-only-063000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.27.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
| delete  | -p download-only-063000                                                  | download-only-063000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
| delete  | -p download-only-063000                                                  | download-only-063000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
| start   | --download-only -p                                                       | binary-mirror-577000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | binary-mirror-577000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50607                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-577000                                                  | binary-mirror-577000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
| start   | -p addons-827000                                                         | addons-827000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | --wait=true --memory=4000                                                |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
|         | --addons=ingress                                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-827000                                                         | addons-827000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
| start   | -p nospam-257000 -n=1 --memory=2250 --wait=false                         | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-257000                                                         | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
| start   | -p functional-602000                                                     | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-602000                                                     | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-602000 cache add                                              | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-602000 cache add                                              | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-602000 cache add                                              | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-602000 cache add                                              | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | minikube-local-cache-test:functional-602000                              |                      |         |         |                     |                     |
| cache   | functional-602000 cache delete                                           | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | minikube-local-cache-test:functional-602000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
| ssh     | functional-602000 ssh sudo                                               | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-602000                                                        | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-602000 ssh                                                    | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-602000 cache reload                                           | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
| ssh     | functional-602000 ssh                                                    | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-602000 kubectl --                                             | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
|         | --context functional-602000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-602000                                                     | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

* 
* ==> Last Start <==
* Log file created at: 2023/05/30 13:06:09
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.20.4 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0530 13:06:09.028406    6862 out.go:296] Setting OutFile to fd 1 ...
I0530 13:06:09.028501    6862 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:06:09.028503    6862 out.go:309] Setting ErrFile to fd 2...
I0530 13:06:09.028505    6862 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:06:09.028570    6862 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
I0530 13:06:09.029504    6862 out.go:303] Setting JSON to false
I0530 13:06:09.044928    6862 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3940,"bootTime":1685473229,"procs":423,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0530 13:06:09.044986    6862 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0530 13:06:09.050079    6862 out.go:177] * [functional-602000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
I0530 13:06:09.057129    6862 out.go:177]   - MINIKUBE_LOCATION=16597
I0530 13:06:09.057164    6862 notify.go:220] Checking for updates...
I0530 13:06:09.064023    6862 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
I0530 13:06:09.067053    6862 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0530 13:06:09.069967    6862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0530 13:06:09.073045    6862 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
I0530 13:06:09.076046    6862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0530 13:06:09.079257    6862 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0530 13:06:09.079279    6862 driver.go:375] Setting default libvirt URI to qemu:///system
I0530 13:06:09.083974    6862 out.go:177] * Using the qemu2 driver based on existing profile
I0530 13:06:09.090968    6862 start.go:295] selected driver: qemu2
I0530 13:06:09.090973    6862 start.go:870] validating driver "qemu2" against &{Name:functional-602000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.2 ClusterName:functional-602000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0530 13:06:09.091037    6862 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0530 13:06:09.092856    6862 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0530 13:06:09.092871    6862 cni.go:84] Creating CNI manager for ""
I0530 13:06:09.092877    6862 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0530 13:06:09.092882    6862 start_flags.go:319] config:
{Name:functional-602000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-602000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0530 13:06:09.092953    6862 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0530 13:06:09.099897    6862 out.go:177] * Starting control plane node functional-602000 in cluster functional-602000
I0530 13:06:09.103992    6862 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
I0530 13:06:09.104014    6862 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
I0530 13:06:09.104024    6862 cache.go:57] Caching tarball of preloaded images
I0530 13:06:09.104093    6862 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0530 13:06:09.104096    6862 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
I0530 13:06:09.104149    6862 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/functional-602000/config.json ...
I0530 13:06:09.104524    6862 cache.go:195] Successfully downloaded all kic artifacts
I0530 13:06:09.104539    6862 start.go:364] acquiring machines lock for functional-602000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0530 13:06:09.104569    6862 start.go:368] acquired machines lock for "functional-602000" in 26.292µs
I0530 13:06:09.104577    6862 start.go:96] Skipping create...Using existing machine configuration
I0530 13:06:09.104579    6862 fix.go:55] fixHost starting: 
I0530 13:06:09.104694    6862 fix.go:103] recreateIfNeeded on functional-602000: state=Stopped err=<nil>
W0530 13:06:09.104698    6862 fix.go:129] unexpected machine state, will restart: <nil>
I0530 13:06:09.111938    6862 out.go:177] * Restarting existing qemu2 VM for "functional-602000" ...
I0530 13:06:09.116065    6862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d7:cd:70:27:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/disk.qcow2
I0530 13:06:09.117855    6862 main.go:141] libmachine: STDOUT: 
I0530 13:06:09.117872    6862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0530 13:06:09.117900    6862 fix.go:57] fixHost completed within 13.3205ms
I0530 13:06:09.117904    6862 start.go:83] releasing machines lock for "functional-602000", held for 13.332708ms
W0530 13:06:09.117910    6862 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0530 13:06:09.117962    6862 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0530 13:06:09.117967    6862 start.go:702] Will try again in 5 seconds ...
I0530 13:06:14.120045    6862 start.go:364] acquiring machines lock for functional-602000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0530 13:06:14.120631    6862 start.go:368] acquired machines lock for "functional-602000" in 414.667µs
I0530 13:06:14.120841    6862 start.go:96] Skipping create...Using existing machine configuration
I0530 13:06:14.120854    6862 fix.go:55] fixHost starting: 
I0530 13:06:14.121628    6862 fix.go:103] recreateIfNeeded on functional-602000: state=Stopped err=<nil>
W0530 13:06:14.121647    6862 fix.go:129] unexpected machine state, will restart: <nil>
I0530 13:06:14.125480    6862 out.go:177] * Restarting existing qemu2 VM for "functional-602000" ...
I0530 13:06:14.133405    6862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d7:cd:70:27:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/disk.qcow2
I0530 13:06:14.142811    6862 main.go:141] libmachine: STDOUT: 
I0530 13:06:14.142857    6862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0530 13:06:14.142947    6862 fix.go:57] fixHost completed within 22.094208ms
I0530 13:06:14.142960    6862 start.go:83] releasing machines lock for "functional-602000", held for 22.281875ms
W0530 13:06:14.143376    6862 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-602000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0530 13:06:14.149209    6862 out.go:177] 
W0530 13:06:14.153391    6862 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0530 13:06:14.153455    6862 out.go:239] * 
W0530 13:06:14.156120    6862 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0530 13:06:14.163301    6862 out.go:177] 

* 
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-602000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
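The LogsCmd failure above bottoms out in the libmachine error repeated in the Last Start log: the qemu2 driver could not reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM never restarted and there were no cluster logs to collect. Below is a minimal, hypothetical Go sketch, not part of minikube or its test suite, that reproduces only that connection check on the CI host; the socket path is taken from the log above, everything else is illustrative.

package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the unix socket the qemu2 driver tried to use. On this host the dial
// should fail with "connection refused", matching the libmachine STDERR lines
// in the Last Start log above. The path comes from that log; this is not
// minikube code.
func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}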

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3732503168/001/logs.txt
functional_test.go:1223: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-063000 | jenkins | v1.30.1 | 30 May 23 13:04 PDT |                     |
|         | -p download-only-063000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.16.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | -o=json --download-only                                                  | download-only-063000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | -p download-only-063000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.27.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
| delete  | -p download-only-063000                                                  | download-only-063000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
| delete  | -p download-only-063000                                                  | download-only-063000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
| start   | --download-only -p                                                       | binary-mirror-577000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | binary-mirror-577000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50607                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-577000                                                  | binary-mirror-577000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
| start   | -p addons-827000                                                         | addons-827000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | --wait=true --memory=4000                                                |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
|         | --addons=ingress                                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-827000                                                         | addons-827000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
| start   | -p nospam-257000 -n=1 --memory=2250 --wait=false                         | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-257000 --log_dir                                                  | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-257000                                                         | nospam-257000        | jenkins | v1.30.1 | 30 May 23 13:05 PDT | 30 May 23 13:05 PDT |
| start   | -p functional-602000                                                     | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-602000                                                     | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:05 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-602000 cache add                                              | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-602000 cache add                                              | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-602000 cache add                                              | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-602000 cache add                                              | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | minikube-local-cache-test:functional-602000                              |                      |         |         |                     |                     |
| cache   | functional-602000 cache delete                                           | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | minikube-local-cache-test:functional-602000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
| ssh     | functional-602000 ssh sudo                                               | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-602000                                                        | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-602000 ssh                                                    | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-602000 cache reload                                           | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
| ssh     | functional-602000 ssh                                                    | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.30.1 | 30 May 23 13:06 PDT | 30 May 23 13:06 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-602000 kubectl --                                             | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
|         | --context functional-602000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-602000                                                     | functional-602000    | jenkins | v1.30.1 | 30 May 23 13:06 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

* 
* ==> Last Start <==
* Log file created at: 2023/05/30 13:06:09
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.20.4 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0530 13:06:09.028406    6862 out.go:296] Setting OutFile to fd 1 ...
I0530 13:06:09.028501    6862 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:06:09.028503    6862 out.go:309] Setting ErrFile to fd 2...
I0530 13:06:09.028505    6862 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:06:09.028570    6862 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
I0530 13:06:09.029504    6862 out.go:303] Setting JSON to false
I0530 13:06:09.044928    6862 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3940,"bootTime":1685473229,"procs":423,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0530 13:06:09.044986    6862 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0530 13:06:09.050079    6862 out.go:177] * [functional-602000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
I0530 13:06:09.057129    6862 out.go:177]   - MINIKUBE_LOCATION=16597
I0530 13:06:09.057164    6862 notify.go:220] Checking for updates...
I0530 13:06:09.064023    6862 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
I0530 13:06:09.067053    6862 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0530 13:06:09.069967    6862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0530 13:06:09.073045    6862 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
I0530 13:06:09.076046    6862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0530 13:06:09.079257    6862 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0530 13:06:09.079279    6862 driver.go:375] Setting default libvirt URI to qemu:///system
I0530 13:06:09.083974    6862 out.go:177] * Using the qemu2 driver based on existing profile
I0530 13:06:09.090968    6862 start.go:295] selected driver: qemu2
I0530 13:06:09.090973    6862 start.go:870] validating driver "qemu2" against &{Name:functional-602000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.2 ClusterName:functional-602000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0530 13:06:09.091037    6862 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0530 13:06:09.092856    6862 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0530 13:06:09.092871    6862 cni.go:84] Creating CNI manager for ""
I0530 13:06:09.092877    6862 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0530 13:06:09.092882    6862 start_flags.go:319] config:
{Name:functional-602000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-602000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0530 13:06:09.092953    6862 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0530 13:06:09.099897    6862 out.go:177] * Starting control plane node functional-602000 in cluster functional-602000
I0530 13:06:09.103992    6862 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
I0530 13:06:09.104014    6862 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
I0530 13:06:09.104024    6862 cache.go:57] Caching tarball of preloaded images
I0530 13:06:09.104093    6862 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0530 13:06:09.104096    6862 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
I0530 13:06:09.104149    6862 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/functional-602000/config.json ...
I0530 13:06:09.104524    6862 cache.go:195] Successfully downloaded all kic artifacts
I0530 13:06:09.104539    6862 start.go:364] acquiring machines lock for functional-602000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0530 13:06:09.104569    6862 start.go:368] acquired machines lock for "functional-602000" in 26.292µs
I0530 13:06:09.104577    6862 start.go:96] Skipping create...Using existing machine configuration
I0530 13:06:09.104579    6862 fix.go:55] fixHost starting: 
I0530 13:06:09.104694    6862 fix.go:103] recreateIfNeeded on functional-602000: state=Stopped err=<nil>
W0530 13:06:09.104698    6862 fix.go:129] unexpected machine state, will restart: <nil>
I0530 13:06:09.111938    6862 out.go:177] * Restarting existing qemu2 VM for "functional-602000" ...
I0530 13:06:09.116065    6862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d7:cd:70:27:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/disk.qcow2
I0530 13:06:09.117855    6862 main.go:141] libmachine: STDOUT: 
I0530 13:06:09.117872    6862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0530 13:06:09.117900    6862 fix.go:57] fixHost completed within 13.3205ms
I0530 13:06:09.117904    6862 start.go:83] releasing machines lock for "functional-602000", held for 13.332708ms
W0530 13:06:09.117910    6862 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0530 13:06:09.117962    6862 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0530 13:06:09.117967    6862 start.go:702] Will try again in 5 seconds ...
I0530 13:06:14.120045    6862 start.go:364] acquiring machines lock for functional-602000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0530 13:06:14.120631    6862 start.go:368] acquired machines lock for "functional-602000" in 414.667µs
I0530 13:06:14.120841    6862 start.go:96] Skipping create...Using existing machine configuration
I0530 13:06:14.120854    6862 fix.go:55] fixHost starting: 
I0530 13:06:14.121628    6862 fix.go:103] recreateIfNeeded on functional-602000: state=Stopped err=<nil>
W0530 13:06:14.121647    6862 fix.go:129] unexpected machine state, will restart: <nil>
I0530 13:06:14.125480    6862 out.go:177] * Restarting existing qemu2 VM for "functional-602000" ...
I0530 13:06:14.133405    6862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d7:cd:70:27:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/functional-602000/disk.qcow2
I0530 13:06:14.142811    6862 main.go:141] libmachine: STDOUT: 
I0530 13:06:14.142857    6862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0530 13:06:14.142947    6862 fix.go:57] fixHost completed within 22.094208ms
I0530 13:06:14.142960    6862 start.go:83] releasing machines lock for "functional-602000", held for 22.281875ms
W0530 13:06:14.143376    6862 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-602000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0530 13:06:14.149209    6862 out.go:177] 
W0530 13:06:14.153391    6862 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0530 13:06:14.153455    6862 out.go:239] * 
W0530 13:06:14.156120    6862 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0530 13:06:14.163301    6862 out.go:177] 

                                                
                                                
* 
***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-602000 --alsologtostderr -v=1]
functional_test.go:913: output didn't produce a URL
functional_test.go:905: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-602000 --alsologtostderr -v=1] ...
functional_test.go:905: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-602000 --alsologtostderr -v=1] stdout:
functional_test.go:905: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-602000 --alsologtostderr -v=1] stderr:
I0530 13:07:04.397941    7162 out.go:296] Setting OutFile to fd 1 ...
I0530 13:07:04.398307    7162 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:07:04.398310    7162 out.go:309] Setting ErrFile to fd 2...
I0530 13:07:04.398313    7162 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:07:04.398398    7162 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
I0530 13:07:04.398616    7162 mustload.go:65] Loading cluster: functional-602000
I0530 13:07:04.398801    7162 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0530 13:07:04.403136    7162 out.go:177] * The control plane node must be running for this command
I0530 13:07:04.407294    7162 out.go:177]   To start a cluster, run: "minikube start -p functional-602000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (41.464166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 status
functional_test.go:849: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 status: exit status 7 (29.622542ms)

                                                
                                                
-- stdout --
	functional-602000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
functional_test.go:851: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-602000 status" : exit status 7
functional_test.go:855: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:855: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (28.778292ms)

                                                
                                                
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

                                                
                                                
-- /stdout --
functional_test.go:857: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-602000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:867: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 status -o json
functional_test.go:867: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 status -o json: exit status 7 (28.510458ms)

                                                
                                                
-- stdout --
	{"Name":"functional-602000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:869: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-602000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (28.69575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-602000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1625: (dbg) Non-zero exit: kubectl --context functional-602000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.983708ms)

                                                
                                                
** stderr ** 
	W0530 13:06:23.699322    7047 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "functional-602000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1631: failed to create hello-node deployment with this command "kubectl --context functional-602000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1596: service test failed - dumping debug information
functional_test.go:1597: -----------------------service failure post-mortem--------------------------------
functional_test.go:1600: (dbg) Run:  kubectl --context functional-602000 describe po hello-node-connect
functional_test.go:1600: (dbg) Non-zero exit: kubectl --context functional-602000 describe po hello-node-connect: exit status 1 (25.867292ms)

                                                
                                                
** stderr ** 
	W0530 13:06:23.725422    7048 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	Error in configuration: context was not found for specified context: functional-602000

                                                
                                                
** /stderr **
functional_test.go:1602: "kubectl --context functional-602000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1604: hello-node pod describe:
functional_test.go:1606: (dbg) Run:  kubectl --context functional-602000 logs -l app=hello-node-connect
functional_test.go:1606: (dbg) Non-zero exit: kubectl --context functional-602000 logs -l app=hello-node-connect: exit status 1 (26.28525ms)

                                                
                                                
** stderr ** 
	W0530 13:06:23.751809    7049 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	Error in configuration: context was not found for specified context: functional-602000

                                                
                                                
** /stderr **
functional_test.go:1608: "kubectl --context functional-602000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1610: hello-node logs:
functional_test.go:1612: (dbg) Run:  kubectl --context functional-602000 describe svc hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-602000 describe svc hello-node-connect: exit status 1 (25.734333ms)

                                                
                                                
** stderr ** 
	W0530 13:06:23.777630    7050 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	Error in configuration: context was not found for specified context: functional-602000

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-602000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (29.357625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-602000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (31.461166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "echo hello"
functional_test.go:1723: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "echo hello": exit status 89 (39.994791ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
functional_test.go:1728: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-602000 ssh \"echo hello\"" : exit status 89
functional_test.go:1732: expected minikube ssh command output to be -"hello"- but got *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-602000\"\n"*. args "out/minikube-darwin-arm64 -p functional-602000 ssh \"echo hello\""
functional_test.go:1740: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "cat /etc/hostname"
functional_test.go:1740: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "cat /etc/hostname": exit status 89 (45.899792ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
functional_test.go:1746: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-602000 ssh \"cat /etc/hostname\"" : exit status 89
functional_test.go:1750: expected minikube ssh command output to be -"functional-602000"- but got *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-602000\"\n"*. args "out/minikube-darwin-arm64 -p functional-602000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (29.2525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 89 (54.838042ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-602000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 89
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh -n functional-602000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh -n functional-602000 "sudo cat /home/docker/cp-test.txt": exit status 89 (38.030208ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-602000 ssh -n functional-602000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 89
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control plane node must be running for this command\n  To star",
+ 	"t a cluster, run: \"minikube start -p functional-602000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 cp functional-602000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1513246594/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 cp functional-602000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1513246594/001/cp-test.txt: exit status 89 (38.466417ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-602000 cp functional-602000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1513246594/001/cp-test.txt" : exit status 89
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh -n functional-602000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh -n functional-602000 "sudo cat /home/docker/cp-test.txt": exit status 89 (41.055208ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-602000 ssh -n functional-602000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 89
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1513246594/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-602000\"\n",
+ 	"",
)
--- FAIL: TestFunctional/parallel/CpCmd (0.17s)

                                                
                                    
TestFunctional/parallel/FileSync (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/6593/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "sudo cat /etc/test/nested/copy/6593/hosts"
functional_test.go:1926: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "sudo cat /etc/test/nested/copy/6593/hosts": exit status 89 (39.045625ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
functional_test.go:1928: out/minikube-darwin-arm64 -p functional-602000 ssh "sudo cat /etc/test/nested/copy/6593/hosts" failed: exit status 89
functional_test.go:1931: file sync test content: * The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-602000"
functional_test.go:1941: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control plane node must be running for this command\n  To star",
+ 	"t a cluster, run: \"minikube start -p functional-602000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (28.872834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

                                                
                                    
TestFunctional/parallel/CertSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/6593.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "sudo cat /etc/ssl/certs/6593.pem"
functional_test.go:1968: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "sudo cat /etc/ssl/certs/6593.pem": exit status 89 (42.65375ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
functional_test.go:1970: failed to check existence of "/etc/ssl/certs/6593.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-602000 ssh \"sudo cat /etc/ssl/certs/6593.pem\"": exit status 89
functional_test.go:1976: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/6593.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-602000"
	"""
)
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/6593.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "sudo cat /usr/share/ca-certificates/6593.pem"
functional_test.go:1968: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "sudo cat /usr/share/ca-certificates/6593.pem": exit status 89 (39.705542ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
functional_test.go:1970: failed to check existence of "/usr/share/ca-certificates/6593.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-602000 ssh \"sudo cat /usr/share/ca-certificates/6593.pem\"": exit status 89
functional_test.go:1976: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/6593.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-602000"
	"""
)
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1968: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 89 (38.739083ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
functional_test.go:1970: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-602000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 89
functional_test.go:1976: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-602000"
	"""
)
functional_test.go:1994: Checking for existence of /etc/ssl/certs/65932.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "sudo cat /etc/ssl/certs/65932.pem"
functional_test.go:1995: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "sudo cat /etc/ssl/certs/65932.pem": exit status 89 (45.674041ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
functional_test.go:1997: failed to check existence of "/etc/ssl/certs/65932.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-602000 ssh \"sudo cat /etc/ssl/certs/65932.pem\"": exit status 89
functional_test.go:2003: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/65932.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-602000"
	"""
)
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/65932.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "sudo cat /usr/share/ca-certificates/65932.pem"
functional_test.go:1995: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "sudo cat /usr/share/ca-certificates/65932.pem": exit status 89 (40.631209ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
functional_test.go:1997: failed to check existence of "/usr/share/ca-certificates/65932.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-602000 ssh \"sudo cat /usr/share/ca-certificates/65932.pem\"": exit status 89
functional_test.go:2003: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/65932.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-602000"
	"""
)
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1995: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 89 (38.528833ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
functional_test.go:1997: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-602000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 89
functional_test.go:2003: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-602000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (28.656375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-602000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:217: (dbg) Non-zero exit: kubectl --context functional-602000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.169916ms)

                                                
                                                
** stderr ** 
	W0530 13:06:15.028768    6916 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	Error in configuration: context was not found for specified context: functional-602000

                                                
                                                
** /stderr **
functional_test.go:219: failed to 'kubectl get nodes' with args "kubectl --context functional-602000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:225: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	W0530 13:06:15.028768    6916 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	Error in configuration: context was not found for specified context: functional-602000

                                                
                                                
** /stderr **
functional_test.go:225: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	W0530 13:06:15.028768    6916 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	Error in configuration: context was not found for specified context: functional-602000

                                                
                                                
** /stderr **
functional_test.go:225: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	W0530 13:06:15.028768    6916 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	Error in configuration: context was not found for specified context: functional-602000

                                                
                                                
** /stderr **
functional_test.go:225: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	W0530 13:06:15.028768    6916 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	Error in configuration: context was not found for specified context: functional-602000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-602000 -n functional-602000: exit status 7 (28.899084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-602000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "sudo systemctl is-active crio": exit status 89 (37.849834ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
functional_test.go:2025: output of 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --: exit status 89
functional_test.go:2028: For runtime "docker": expected "crio" to be inactive but got "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-602000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 version -o=json --components
functional_test.go:2265: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 version -o=json --components: exit status 89 (39.946667ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

                                                
                                                
-- /stdout --
functional_test.go:2267: error version: exit status 89
functional_test.go:2272: expected to see "buildctl" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-602000"
functional_test.go:2272: expected to see "commit" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-602000"
functional_test.go:2272: expected to see "containerd" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-602000"
functional_test.go:2272: expected to see "crictl" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-602000"
functional_test.go:2272: expected to see "crio" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-602000"
functional_test.go:2272: expected to see "ctr" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-602000"
functional_test.go:2272: expected to see "docker" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-602000"
functional_test.go:2272: expected to see "minikubeVersion" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-602000"
functional_test.go:2272: expected to see "podman" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-602000"
functional_test.go:2272: expected to see "crun" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-602000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image ls --format short --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-602000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-602000 image ls --format short --alsologtostderr:
I0530 13:07:04.828130    7179 out.go:296] Setting OutFile to fd 1 ...
I0530 13:07:04.828262    7179 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:07:04.828265    7179 out.go:309] Setting ErrFile to fd 2...
I0530 13:07:04.828267    7179 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:07:04.828338    7179 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
I0530 13:07:04.828727    7179 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0530 13:07:04.828784    7179 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
functional_test.go:273: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image ls --format table --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-602000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-602000 image ls --format table --alsologtostderr:
I0530 13:07:04.896182    7183 out.go:296] Setting OutFile to fd 1 ...
I0530 13:07:04.896321    7183 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:07:04.896324    7183 out.go:309] Setting ErrFile to fd 2...
I0530 13:07:04.896327    7183 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:07:04.896393    7183 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
I0530 13:07:04.896779    7183 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0530 13:07:04.896835    7183 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
functional_test.go:273: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image ls --format json --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-602000 image ls --format json --alsologtostderr:
[]
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-602000 image ls --format json --alsologtostderr:
I0530 13:07:04.862600    7181 out.go:296] Setting OutFile to fd 1 ...
I0530 13:07:04.862725    7181 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:07:04.862728    7181 out.go:309] Setting ErrFile to fd 2...
I0530 13:07:04.862730    7181 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:07:04.862799    7181 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
I0530 13:07:04.863188    7181 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0530 13:07:04.863244    7181 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
functional_test.go:273: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)
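Here the JSON listing is literally "[]", so any decode of it yields zero entries. A minimal Go sketch of decoding such output; the repoTags field name and the imageEntry shape are assumptions for illustration, not taken from this report:

package main

import (
    "encoding/json"
    "fmt"
    "strings"
)

// imageEntry is a hypothetical shape for one entry of `image ls --format json`;
// only the repoTags field is inspected here.
type imageEntry struct {
    RepoTags []string `json:"repoTags"`
}

func main() {
    var entries []imageEntry
    if err := json.Unmarshal([]byte(`[]`), &entries); err != nil { // stdout captured above
        panic(err)
    }
    found := false
    for _, e := range entries {
        for _, tag := range e.RepoTags {
            if strings.HasPrefix(tag, "registry.k8s.io/pause") {
                found = true
            }
        }
    }
    fmt.Println("registry.k8s.io/pause listed:", found) // false: the listing is empty
}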

TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image ls --format yaml --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-602000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-602000 image ls --format yaml --alsologtostderr:
I0530 13:07:04.793421    7177 out.go:296] Setting OutFile to fd 1 ...
I0530 13:07:04.793554    7177 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:07:04.793557    7177 out.go:309] Setting ErrFile to fd 2...
I0530 13:07:04.793560    7177 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:07:04.793630    7177 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
I0530 13:07:04.794015    7177 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0530 13:07:04.794072    7177 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
functional_test.go:273: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh pgrep buildkitd: exit status 89 (40.725167ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image build -t localhost/my-image:functional-602000 testdata/build --alsologtostderr
functional_test.go:321: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-602000 image build -t localhost/my-image:functional-602000 testdata/build --alsologtostderr:
I0530 13:07:04.971015    7187 out.go:296] Setting OutFile to fd 1 ...
I0530 13:07:04.971391    7187 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:07:04.971394    7187 out.go:309] Setting ErrFile to fd 2...
I0530 13:07:04.971397    7187 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:07:04.971477    7187 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
I0530 13:07:04.971881    7187 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0530 13:07:04.972325    7187 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0530 13:07:04.972517    7187 build_images.go:123] succeeded building to: 
I0530 13:07:04.972519    7187 build_images.go:124] failed building to: 
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image ls
functional_test.go:441: expected "localhost/my-image:functional-602000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-602000 docker-env) && out/minikube-darwin-arm64 status -p functional-602000"
functional_test.go:494: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-602000 docker-env) && out/minikube-darwin-arm64 status -p functional-602000": exit status 1 (46.150959ms)
functional_test.go:500: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
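The failing command chain here is plain bash driven from Go. A minimal sketch of the same round trip via os/exec, assuming the binary path and profile name shown in the log above:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Evaluate docker-env and then ask for status in one bash invocation,
    // mirroring the command captured above; it exits non-zero while the
    // cluster is down.
    script := `eval $(out/minikube-darwin-arm64 -p functional-602000 docker-env) && ` +
        `out/minikube-darwin-arm64 status -p functional-602000`
    out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
    fmt.Printf("%s\nexit: %v\n", out, err)
}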

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 update-context --alsologtostderr -v=2
functional_test.go:2114: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 update-context --alsologtostderr -v=2: exit status 89 (41.521666ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
** stderr ** 
	I0530 13:07:04.670429    7171 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:07:04.670779    7171 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:07:04.670782    7171 out.go:309] Setting ErrFile to fd 2...
	I0530 13:07:04.670785    7171 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:07:04.670897    7171 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:07:04.671146    7171 mustload.go:65] Loading cluster: functional-602000
	I0530 13:07:04.671346    7171 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:07:04.675954    7171 out.go:177] * The control plane node must be running for this command
	I0530 13:07:04.680055    7171 out.go:177]   To start a cluster, run: "minikube start -p functional-602000"

** /stderr **
functional_test.go:2116: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-602000 update-context --alsologtostderr -v=2": exit status 89
functional_test.go:2121: update-context: got="* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-602000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 update-context --alsologtostderr -v=2
functional_test.go:2114: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 update-context --alsologtostderr -v=2: exit status 89 (40.358875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
** stderr ** 
	I0530 13:07:04.752744    7175 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:07:04.752866    7175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:07:04.752869    7175 out.go:309] Setting ErrFile to fd 2...
	I0530 13:07:04.752871    7175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:07:04.752940    7175 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:07:04.753161    7175 mustload.go:65] Loading cluster: functional-602000
	I0530 13:07:04.753327    7175 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:07:04.757982    7175 out.go:177] * The control plane node must be running for this command
	I0530 13:07:04.762138    7175 out.go:177]   To start a cluster, run: "minikube start -p functional-602000"

** /stderr **
functional_test.go:2116: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-602000 update-context --alsologtostderr -v=2": exit status 89
functional_test.go:2121: update-context: got="* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-602000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 update-context --alsologtostderr -v=2
functional_test.go:2114: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 update-context --alsologtostderr -v=2: exit status 89 (39.759334ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
** stderr ** 
	I0530 13:07:04.712344    7173 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:07:04.712482    7173 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:07:04.712484    7173 out.go:309] Setting ErrFile to fd 2...
	I0530 13:07:04.712487    7173 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:07:04.712560    7173 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:07:04.712781    7173 mustload.go:65] Loading cluster: functional-602000
	I0530 13:07:04.712972    7173 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:07:04.716151    7173 out.go:177] * The control plane node must be running for this command
	I0530 13:07:04.720062    7173 out.go:177]   To start a cluster, run: "minikube start -p functional-602000"

** /stderr **
functional_test.go:2116: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-602000 update-context --alsologtostderr -v=2": exit status 89
functional_test.go:2121: update-context: got="* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-602000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)
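All three update-context subtests fail the same way: the command prints the control-plane advice instead of the expected phrase ("No changes" or "context has been updated"). A minimal Go sketch of that wildcard-style expectation as a plain substring check (hypothetical helper, not the harness's actual matcher):

package main

import (
    "fmt"
    "strings"
)

// matchesWant mimics a *"phrase"* expectation: the output only needs to
// contain the phrase somewhere.
func matchesWant(got, phrase string) bool {
    return strings.Contains(got, phrase)
}

func main() {
    got := "* The control plane node must be running for this command\n" +
        "  To start a cluster, run: \"minikube start -p functional-602000\"\n"
    fmt.Println(matchesWant(got, "No changes"))               // false
    fmt.Println(matchesWant(got, "context has been updated")) // false
}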

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-602000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1435: (dbg) Non-zero exit: kubectl --context functional-602000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.289042ms)

** stderr ** 
	W0530 13:06:15.506672    6951 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "functional-602000" does not exist

** /stderr **
functional_test.go:1441: failed to create hello-node deployment with this command "kubectl --context functional-602000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 service list
functional_test.go:1457: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 service list: exit status 89 (45.1515ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test.go:1459: failed to do service list. args "out/minikube-darwin-arm64 -p functional-602000 service list" : exit status 89
functional_test.go:1462: expected 'service list' to contain *hello-node* but got -"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-602000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 service list -o json
functional_test.go:1487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 service list -o json: exit status 89 (40.75525ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test.go:1489: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-602000 service list -o json": exit status 89
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 service --namespace=default --https --url hello-node
functional_test.go:1507: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 service --namespace=default --https --url hello-node: exit status 89 (42.013792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test.go:1509: failed to get service url. args "out/minikube-darwin-arm64 -p functional-602000 service --namespace=default --https --url hello-node" : exit status 89
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 service hello-node --url --format={{.IP}}
functional_test.go:1538: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 service hello-node --url --format={{.IP}}: exit status 89 (49.601ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test.go:1540: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-602000 service hello-node --url --format={{.IP}}": exit status 89
functional_test.go:1546: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-602000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 service hello-node --url
functional_test.go:1557: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 service hello-node --url: exit status 89 (40.849417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test.go:1559: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-602000 service hello-node --url": exit status 89
functional_test.go:1563: found endpoint for hello-node: * The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-602000"
functional_test.go:1567: failed to parse "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-602000\"": parse "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-602000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
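The parse error at functional_test.go:1567 is exactly what net/url reports when the string handed to it contains a newline. A short Go reproduction using the advice text from the log:

package main

import (
    "fmt"
    "net/url"
)

func main() {
    // The advice text spans two lines; the embedded '\n' is a control
    // character, which url.Parse rejects outright.
    advice := "* The control plane node must be running for this command\n" +
        "  To start a cluster, run: \"minikube start -p functional-602000\""
    _, err := url.Parse(advice)
    fmt.Println(err) // net/url: invalid control character in URL
}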

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-602000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-602000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 89. stderr: I0530 13:06:15.989505    6978 out.go:296] Setting OutFile to fd 1 ...
I0530 13:06:15.989616    6978 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:06:15.989620    6978 out.go:309] Setting ErrFile to fd 2...
I0530 13:06:15.989623    6978 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:06:15.989694    6978 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
I0530 13:06:15.989886    6978 mustload.go:65] Loading cluster: functional-602000
I0530 13:06:15.990066    6978 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0530 13:06:15.994969    6978 out.go:177] * The control plane node must be running for this command
I0530 13:06:16.002980    6978 out.go:177]   To start a cluster, run: "minikube start -p functional-602000"

stdout: * The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-602000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-602000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-602000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-602000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-602000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-602000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-602000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.06s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-602000": client config: context "functional-602000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (66.92s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-602000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-602000 get svc nginx-svc: exit status 1 (70.488916ms)

** stderr ** 
	W0530 13:07:22.949791    7196 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	Error in configuration: context was not found for specified context: functional-602000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-602000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (66.92s)
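The "http: no Host in request URL" message at the top of this block is the stock net/http error for a URL that has a scheme but no host, which is what the request collapses to once the tunnel never reports a service IP. A short Go reproduction:

package main

import (
    "fmt"
    "net/http"
)

func main() {
    // The tunnel never produced a service IP, so the test ended up fetching
    // the bare scheme; net/http rejects a URL with no host.
    _, err := http.Get("http:")
    fmt.Println(err) // Get "http:": http: no Host in request URL
}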

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image load --daemon gcr.io/google-containers/addon-resizer:functional-602000 --alsologtostderr
functional_test.go:353: (dbg) Done: out/minikube-darwin-arm64 -p functional-602000 image load --daemon gcr.io/google-containers/addon-resizer:functional-602000 --alsologtostderr: (1.349315583s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image ls
functional_test.go:441: expected "gcr.io/google-containers/addon-resizer:functional-602000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image load --daemon gcr.io/google-containers/addon-resizer:functional-602000 --alsologtostderr
functional_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p functional-602000 image load --daemon gcr.io/google-containers/addon-resizer:functional-602000 --alsologtostderr: (1.352142959s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image ls
functional_test.go:441: expected "gcr.io/google-containers/addon-resizer:functional-602000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.39s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.157627708s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-602000
functional_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image load --daemon gcr.io/google-containers/addon-resizer:functional-602000 --alsologtostderr
functional_test.go:243: (dbg) Done: out/minikube-darwin-arm64 -p functional-602000 image load --daemon gcr.io/google-containers/addon-resizer:functional-602000 --alsologtostderr: (1.241870834s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image ls
functional_test.go:441: expected "gcr.io/google-containers/addon-resizer:functional-602000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.50s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image save gcr.io/google-containers/addon-resizer:functional-602000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:384: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image ls
functional_test.go:441: expected "gcr.io/google-containers/addon-resizer:functional-602000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.035456958s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
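The dig invocation above is just a direct A-record query against the cluster DNS service IP. An equivalent check can be sketched in Go with a resolver pinned to 10.96.0.10:53 (addresses and names taken from the log); it times out the same way while the cluster is down:

package main

import (
    "context"
    "fmt"
    "net"
    "time"
)

func main() {
    // Pin every DNS query to the in-cluster resolver instead of the system one.
    r := &net.Resolver{
        PreferGo: true,
        Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
            d := net.Dialer{Timeout: 5 * time.Second}
            return d.DialContext(ctx, network, "10.96.0.10:53")
        },
    }
    ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
    defer cancel()
    addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
    fmt.Println(addrs, err)
}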

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.68s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.68s)

TestImageBuild/serial/Setup (9.94s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-040000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-040000 --driver=qemu2 : exit status 80 (9.867278459s)

-- stdout --
	* [image-040000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node image-040000 in cluster image-040000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-040000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-040000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-040000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-040000 -n image-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-040000 -n image-040000: exit status 7 (68.197542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.94s)
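The qemu2-driver start failures in this report reduce to the same symptom: nothing is listening on /var/run/socket_vmnet, so every VM start gets "Connection refused". A quick reachability probe for that socket, sketched in Go (an illustrative check, not part of the test suite):

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // Try to connect to the unix socket the qemu2 driver hands to QEMU;
    // "connection refused" here matches the provisioning errors above.
    conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    if err != nil {
        fmt.Println("socket_vmnet not reachable:", err)
        return
    }
    conn.Close()
    fmt.Println("socket_vmnet is accepting connections")
}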

TestIngressAddonLegacy/StartLegacyK8sCluster (25.37s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-948000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ingress-addon-legacy-948000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (25.371968709s)

-- stdout --
	* [ingress-addon-legacy-948000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node ingress-addon-legacy-948000 in cluster ingress-addon-legacy-948000
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ingress-addon-legacy-948000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0530 13:08:36.337137    7252 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:08:36.337295    7252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:08:36.337298    7252 out.go:309] Setting ErrFile to fd 2...
	I0530 13:08:36.337301    7252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:08:36.337360    7252 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:08:36.338391    7252 out.go:303] Setting JSON to false
	I0530 13:08:36.353508    7252 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4087,"bootTime":1685473229,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:08:36.353581    7252 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:08:36.359134    7252 out.go:177] * [ingress-addon-legacy-948000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:08:36.366106    7252 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:08:36.366137    7252 notify.go:220] Checking for updates...
	I0530 13:08:36.373100    7252 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:08:36.376042    7252 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:08:36.379099    7252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:08:36.382160    7252 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:08:36.385060    7252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:08:36.388193    7252 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:08:36.392081    7252 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:08:36.399088    7252 start.go:295] selected driver: qemu2
	I0530 13:08:36.399095    7252 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:08:36.399101    7252 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:08:36.400989    7252 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:08:36.404085    7252 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:08:36.407147    7252 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:08:36.407173    7252 cni.go:84] Creating CNI manager for ""
	I0530 13:08:36.407182    7252 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0530 13:08:36.407186    7252 start_flags.go:319] config:
	{Name:ingress-addon-legacy-948000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-948000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
:}
	I0530 13:08:36.407273    7252 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:08:36.416055    7252 out.go:177] * Starting control plane node ingress-addon-legacy-948000 in cluster ingress-addon-legacy-948000
	I0530 13:08:36.420063    7252 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0530 13:08:36.531817    7252 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0530 13:08:36.531861    7252 cache.go:57] Caching tarball of preloaded images
	I0530 13:08:36.532188    7252 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0530 13:08:36.536390    7252 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0530 13:08:36.544257    7252 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0530 13:08:36.680817    7252 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0530 13:08:51.180132    7252 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0530 13:08:51.180282    7252 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0530 13:08:51.929906    7252 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0530 13:08:51.930103    7252 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/ingress-addon-legacy-948000/config.json ...
	I0530 13:08:51.930126    7252 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/ingress-addon-legacy-948000/config.json: {Name:mkc9569556be788b162caf68dda0aa50debe3f84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:08:51.930357    7252 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:08:51.930370    7252 start.go:364] acquiring machines lock for ingress-addon-legacy-948000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:08:51.930400    7252 start.go:368] acquired machines lock for "ingress-addon-legacy-948000" in 25.167µs
	I0530 13:08:51.930413    7252 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-948000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 K
ubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-948000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:08:51.930449    7252 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:08:51.937324    7252 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0530 13:08:51.951922    7252 start.go:159] libmachine.API.Create for "ingress-addon-legacy-948000" (driver="qemu2")
	I0530 13:08:51.951940    7252 client.go:168] LocalClient.Create starting
	I0530 13:08:51.952011    7252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:08:51.952033    7252 main.go:141] libmachine: Decoding PEM data...
	I0530 13:08:51.952044    7252 main.go:141] libmachine: Parsing certificate...
	I0530 13:08:51.952092    7252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:08:51.952106    7252 main.go:141] libmachine: Decoding PEM data...
	I0530 13:08:51.952115    7252 main.go:141] libmachine: Parsing certificate...
	I0530 13:08:51.952428    7252 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:08:52.136423    7252 main.go:141] libmachine: Creating SSH key...
	I0530 13:08:52.198101    7252 main.go:141] libmachine: Creating Disk image...
	I0530 13:08:52.198107    7252 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:08:52.198247    7252 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/disk.qcow2
	I0530 13:08:52.206874    7252 main.go:141] libmachine: STDOUT: 
	I0530 13:08:52.206888    7252 main.go:141] libmachine: STDERR: 
	I0530 13:08:52.206943    7252 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/disk.qcow2 +20000M
	I0530 13:08:52.214155    7252 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:08:52.214175    7252 main.go:141] libmachine: STDERR: 
	I0530 13:08:52.214195    7252 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/disk.qcow2
	I0530 13:08:52.214199    7252 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:08:52.214241    7252 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:ba:59:02:50:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/disk.qcow2
	I0530 13:08:52.215784    7252 main.go:141] libmachine: STDOUT: 
	I0530 13:08:52.215799    7252 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:08:52.215818    7252 client.go:171] LocalClient.Create took 263.880792ms
	I0530 13:08:54.217971    7252 start.go:128] duration metric: createHost completed in 2.287550291s
	I0530 13:08:54.218057    7252 start.go:83] releasing machines lock for "ingress-addon-legacy-948000", held for 2.287704792s
	W0530 13:08:54.218154    7252 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:08:54.225019    7252 out.go:177] * Deleting "ingress-addon-legacy-948000" in qemu2 ...
	W0530 13:08:54.246116    7252 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:08:54.246148    7252 start.go:702] Will try again in 5 seconds ...
	I0530 13:08:59.248292    7252 start.go:364] acquiring machines lock for ingress-addon-legacy-948000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:08:59.248823    7252 start.go:368] acquired machines lock for "ingress-addon-legacy-948000" in 438.333µs
	I0530 13:08:59.248962    7252 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-948000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 K
ubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-948000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:08:59.249237    7252 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:08:59.262008    7252 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0530 13:08:59.310720    7252 start.go:159] libmachine.API.Create for "ingress-addon-legacy-948000" (driver="qemu2")
	I0530 13:08:59.310754    7252 client.go:168] LocalClient.Create starting
	I0530 13:08:59.310892    7252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:08:59.310939    7252 main.go:141] libmachine: Decoding PEM data...
	I0530 13:08:59.310957    7252 main.go:141] libmachine: Parsing certificate...
	I0530 13:08:59.311044    7252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:08:59.311071    7252 main.go:141] libmachine: Decoding PEM data...
	I0530 13:08:59.311086    7252 main.go:141] libmachine: Parsing certificate...
	I0530 13:08:59.311667    7252 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:08:59.439829    7252 main.go:141] libmachine: Creating SSH key...
	I0530 13:08:59.592615    7252 main.go:141] libmachine: Creating Disk image...
	I0530 13:08:59.592621    7252 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:08:59.592801    7252 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/disk.qcow2
	I0530 13:08:59.601972    7252 main.go:141] libmachine: STDOUT: 
	I0530 13:08:59.601998    7252 main.go:141] libmachine: STDERR: 
	I0530 13:08:59.602080    7252 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/disk.qcow2 +20000M
	I0530 13:08:59.609521    7252 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:08:59.609536    7252 main.go:141] libmachine: STDERR: 
	I0530 13:08:59.609559    7252 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/disk.qcow2
	I0530 13:08:59.609567    7252 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:08:59.609607    7252 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:73:43:24:94:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/ingress-addon-legacy-948000/disk.qcow2
	I0530 13:08:59.611237    7252 main.go:141] libmachine: STDOUT: 
	I0530 13:08:59.611249    7252 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:08:59.611267    7252 client.go:171] LocalClient.Create took 300.516458ms
	I0530 13:09:01.613418    7252 start.go:128] duration metric: createHost completed in 2.3642105s
	I0530 13:09:01.613532    7252 start.go:83] releasing machines lock for "ingress-addon-legacy-948000", held for 2.364716541s
	W0530 13:09:01.614183    7252 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ingress-addon-legacy-948000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ingress-addon-legacy-948000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:09:01.623567    7252 out.go:177] 
	W0530 13:09:01.627896    7252 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:09:01.627942    7252 out.go:239] * 
	* 
	W0530 13:09:01.630545    7252 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:09:01.642700    7252 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-arm64 start -p ingress-addon-legacy-948000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (25.37s)
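Every qemu2 start in this report dies at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never receives its network file descriptor. A minimal standalone probe along the following lines (a sketch, not part of the test suite; only the socket path is taken from the logs above) reproduces the same "connection refused" without involving minikube at all:

    // socket_probe.go - sketch: dial the socket_vmnet control socket the way
    // socket_vmnet_client does before handing an fd to qemu-system-aarch64.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path shown in the failing logs
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // With socket_vmnet not running, this reports "connection refused",
            // matching the STDERR captured by libmachine above.
            fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If such a probe fails on the build host, every subsequent qemu2 start can be expected to fail identically, which is what the remaining failures in this section show.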

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (0.12s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-948000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ingress-addon-legacy-948000 addons enable ingress --alsologtostderr -v=5: exit status 10 (86.278ms)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:09:01.730496    7267 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:09:01.731078    7267 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:09:01.731083    7267 out.go:309] Setting ErrFile to fd 2...
	I0530 13:09:01.731087    7267 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:09:01.731195    7267 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:09:01.735671    7267 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0530 13:09:01.739905    7267 config.go:182] Loaded profile config "ingress-addon-legacy-948000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0530 13:09:01.739916    7267 addons.go:66] Setting ingress=true in profile "ingress-addon-legacy-948000"
	I0530 13:09:01.739921    7267 addons.go:228] Setting addon ingress=true in "ingress-addon-legacy-948000"
	I0530 13:09:01.739954    7267 host.go:66] Checking if "ingress-addon-legacy-948000" exists ...
	W0530 13:09:01.740298    7267 host.go:58] "ingress-addon-legacy-948000" host status: Stopped
	W0530 13:09:01.740303    7267 addons.go:274] "ingress-addon-legacy-948000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0530 13:09:01.740311    7267 addons.go:464] Verifying addon ingress=true in "ingress-addon-legacy-948000"
	I0530 13:09:01.744618    7267 out.go:177] * Verifying ingress addon...
	W0530 13:09:01.748706    7267 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:09:01.752628    7267 out.go:177] 
	W0530 13:09:01.756713    7267 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-948000" does not exist: client config: context "ingress-addon-legacy-948000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-948000" does not exist: client config: context "ingress-addon-legacy-948000" does not exist]
	W0530 13:09:01.756721    7267 out.go:239] * 
	* 
	W0530 13:09:01.761000    7267 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:09:01.764631    7267 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-948000 -n ingress-addon-legacy-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-948000 -n ingress-addon-legacy-948000: exit status 7 (35.222875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (0.12s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.03s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-948000 -n ingress-addon-legacy-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-948000 -n ingress-addon-legacy-948000: exit status 7 (28.95425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-948000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.03s)

                                                
                                    
TestJSONOutput/start/Command (9.67s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-487000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-487000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.666949625s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"73acc0b4-a4a9-448d-8dc7-cf2695676874","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-487000] minikube v1.30.1 on Darwin 13.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ce689c8c-1ae1-4457-9c57-ea6d0e198f19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16597"}}
	{"specversion":"1.0","id":"9066453b-36ea-4802-aff8-4f716cf0fa40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig"}}
	{"specversion":"1.0","id":"b8cd6504-5080-465c-b8eb-0cf5c5f1f386","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"8ad2f971-4df7-4552-8249-580af4512e1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"affe2663-304e-4720-b256-ab55a5ba1787","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube"}}
	{"specversion":"1.0","id":"e1e30b33-5236-4811-a9df-dca497a4f982","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ec0acf92-ac70-4aa2-aca4-a9cd6f74c6ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d0a062e-a920-4ca1-b9ad-9559ae5a5c9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"e63f19b8-b14a-4a2e-a315-22c4d7552495","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node json-output-487000 in cluster json-output-487000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"93e3151f-de7b-4b4d-9115-a43b6ed29719","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"64cf0956-a4cb-4fb3-a801-68da90f1d5f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-487000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"8dc6ea5a-5d2a-4274-956a-39aacef4424e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"85d8c426-2e04-44e9-8619-ff069e87136b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"dfa350f0-b2f7-4d46-84ec-5fb38028be27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-487000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"477e7008-e8cd-4ab6-9325-17c2867cfb54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"85aadd98-ec5d-4bce-8265-49577c516655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-487000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.67s)
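Beyond the provisioning failure itself, the JSON check fails because socket_vmnet_client writes bare "OUTPUT:" / "ERROR:" lines into what is otherwise a one-CloudEvent-per-line stream. A rough sketch (the real parsing lives in json_output_test.go and may differ) shows why the first non-JSON line yields exactly the reported "invalid character 'O' looking for beginning of value":

    // sketch: decoding captured stdout line by line as CloudEvents breaks on
    // the injected plain-text "OUTPUT:" line.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        lines := []string{
            `{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"ok"}}`,
            `OUTPUT: `, // emitted by socket_vmnet_client, not valid JSON
        }
        for _, line := range lines {
            var ev map[string]interface{}
            if err := json.Unmarshal([]byte(line), &ev); err != nil {
                fmt.Println("converting to cloud events:", err) // invalid character 'O' ...
                return
            }
        }
    }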

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-487000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-487000 --output=json --user=testUser: exit status 89 (82.53075ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"88a65ecf-763c-46f6-a0c5-a86c78829a52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control plane node must be running for this command"}}
	{"specversion":"1.0","id":"e806f291-659d-4cf5-87ec-e6b1015c4d6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-487000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-487000 --output=json --user=testUser": exit status 89
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-487000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-487000 --output=json --user=testUser: exit status 89 (47.869709ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p json-output-487000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-487000 --output=json --user=testUser": exit status 89
json_output_test.go:213: unable to marshal output: * The control plane node must be running for this command
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.33s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-627000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-627000 --driver=qemu2 : exit status 80 (9.851079625s)

                                                
                                                
-- stdout --
	* [first-627000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node first-627000 in cluster first-627000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-627000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-627000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-627000 --driver=qemu2 ": exit status 80
panic.go:522: *** TestMinikubeProfile FAILED at 2023-05-30 13:09:22.166055 -0700 PDT m=+272.501512251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-629000 -n second-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-629000 -n second-629000: exit status 85 (76.401291ms)

                                                
                                                
-- stdout --
	* Profile "second-629000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-629000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-629000" host is not running, skipping log retrieval (state="* Profile \"second-629000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-629000\"")
helpers_test.go:175: Cleaning up "second-629000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-629000
panic.go:522: *** TestMinikubeProfile FAILED at 2023-05-30 13:09:22.518391 -0700 PDT m=+272.853857334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-627000 -n first-627000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-627000 -n first-627000: exit status 7 (29.271292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-627000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-627000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-627000
--- FAIL: TestMinikubeProfile (10.33s)
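The stderr above also shows the start flow's retry shape: create the host, hit the socket_vmnet error, delete the half-created profile, wait five seconds, try once more, then exit with GUEST_PROVISION. A compressed sketch of that control flow (the error string is taken from the logs; the function name is a placeholder, not minikube's):

    // sketch of the create -> retry-once -> give-up flow seen in the logs.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost stands in for libmachine's create step; in this report it
    // always fails with the socket_vmnet connection error.
    func createHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        err := createHost()
        if err == nil {
            return
        }
        fmt.Println("! StartHost failed, but will try again:", err)
        time.Sleep(5 * time.Second)
        if err := createHost(); err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
        }
    }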

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.16s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-336000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-336000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.087154875s)

                                                
                                                
-- stdout --
	* [mount-start-1-336000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-336000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-336000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-336000 -n mount-start-1-336000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-336000 -n mount-start-1-336000: exit status 7 (69.107875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-336000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.16s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-060000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-060000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.708317083s)

                                                
                                                
-- stdout --
	* [multinode-060000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-060000 in cluster multinode-060000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-060000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:09:33.205860    7389 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:09:33.206000    7389 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:09:33.206003    7389 out.go:309] Setting ErrFile to fd 2...
	I0530 13:09:33.206005    7389 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:09:33.206072    7389 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:09:33.207144    7389 out.go:303] Setting JSON to false
	I0530 13:09:33.222260    7389 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4144,"bootTime":1685473229,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:09:33.222334    7389 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:09:33.232329    7389 out.go:177] * [multinode-060000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:09:33.236286    7389 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:09:33.236312    7389 notify.go:220] Checking for updates...
	I0530 13:09:33.243290    7389 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:09:33.246228    7389 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:09:33.249266    7389 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:09:33.252334    7389 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:09:33.255307    7389 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:09:33.258458    7389 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:09:33.262297    7389 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:09:33.269267    7389 start.go:295] selected driver: qemu2
	I0530 13:09:33.269273    7389 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:09:33.269279    7389 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:09:33.271174    7389 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:09:33.274276    7389 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:09:33.275751    7389 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:09:33.275775    7389 cni.go:84] Creating CNI manager for ""
	I0530 13:09:33.275782    7389 cni.go:136] 0 nodes found, recommending kindnet
	I0530 13:09:33.275790    7389 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0530 13:09:33.275803    7389 start_flags.go:319] config:
	{Name:multinode-060000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-060000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:09:33.275886    7389 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:09:33.284279    7389 out.go:177] * Starting control plane node multinode-060000 in cluster multinode-060000
	I0530 13:09:33.288199    7389 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:09:33.288222    7389 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:09:33.288239    7389 cache.go:57] Caching tarball of preloaded images
	I0530 13:09:33.288304    7389 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:09:33.288314    7389 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:09:33.288503    7389 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/multinode-060000/config.json ...
	I0530 13:09:33.288519    7389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/multinode-060000/config.json: {Name:mk04effc9c1f98ab0583844a2e2bfb970bdde7b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:09:33.288730    7389 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:09:33.288747    7389 start.go:364] acquiring machines lock for multinode-060000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:09:33.288787    7389 start.go:368] acquired machines lock for "multinode-060000" in 35.125µs
	I0530 13:09:33.288801    7389 start.go:93] Provisioning new machine with config: &{Name:multinode-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-060000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:09:33.288826    7389 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:09:33.297266    7389 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:09:33.314810    7389 start.go:159] libmachine.API.Create for "multinode-060000" (driver="qemu2")
	I0530 13:09:33.314829    7389 client.go:168] LocalClient.Create starting
	I0530 13:09:33.314883    7389 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:09:33.314905    7389 main.go:141] libmachine: Decoding PEM data...
	I0530 13:09:33.314917    7389 main.go:141] libmachine: Parsing certificate...
	I0530 13:09:33.314942    7389 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:09:33.314958    7389 main.go:141] libmachine: Decoding PEM data...
	I0530 13:09:33.314965    7389 main.go:141] libmachine: Parsing certificate...
	I0530 13:09:33.315317    7389 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:09:33.430626    7389 main.go:141] libmachine: Creating SSH key...
	I0530 13:09:33.517796    7389 main.go:141] libmachine: Creating Disk image...
	I0530 13:09:33.517804    7389 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:09:33.517945    7389 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2
	I0530 13:09:33.526310    7389 main.go:141] libmachine: STDOUT: 
	I0530 13:09:33.526325    7389 main.go:141] libmachine: STDERR: 
	I0530 13:09:33.526374    7389 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2 +20000M
	I0530 13:09:33.533567    7389 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:09:33.533596    7389 main.go:141] libmachine: STDERR: 
	I0530 13:09:33.533621    7389 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2
	I0530 13:09:33.533626    7389 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:09:33.533676    7389 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:75:4c:85:e6:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2
	I0530 13:09:33.535230    7389 main.go:141] libmachine: STDOUT: 
	I0530 13:09:33.535244    7389 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:09:33.535266    7389 client.go:171] LocalClient.Create took 220.438584ms
	I0530 13:09:35.537428    7389 start.go:128] duration metric: createHost completed in 2.248620458s
	I0530 13:09:35.537527    7389 start.go:83] releasing machines lock for "multinode-060000", held for 2.2487845s
	W0530 13:09:35.537599    7389 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:09:35.550442    7389 out.go:177] * Deleting "multinode-060000" in qemu2 ...
	W0530 13:09:35.571767    7389 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:09:35.571802    7389 start.go:702] Will try again in 5 seconds ...
	I0530 13:09:40.574005    7389 start.go:364] acquiring machines lock for multinode-060000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:09:40.574604    7389 start.go:368] acquired machines lock for "multinode-060000" in 479.625µs
	I0530 13:09:40.574714    7389 start.go:93] Provisioning new machine with config: &{Name:multinode-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-060000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:09:40.575011    7389 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:09:40.580907    7389 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:09:40.630377    7389 start.go:159] libmachine.API.Create for "multinode-060000" (driver="qemu2")
	I0530 13:09:40.630408    7389 client.go:168] LocalClient.Create starting
	I0530 13:09:40.630552    7389 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:09:40.630595    7389 main.go:141] libmachine: Decoding PEM data...
	I0530 13:09:40.630615    7389 main.go:141] libmachine: Parsing certificate...
	I0530 13:09:40.630703    7389 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:09:40.630731    7389 main.go:141] libmachine: Decoding PEM data...
	I0530 13:09:40.630750    7389 main.go:141] libmachine: Parsing certificate...
	I0530 13:09:40.631251    7389 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:09:40.759402    7389 main.go:141] libmachine: Creating SSH key...
	I0530 13:09:40.826089    7389 main.go:141] libmachine: Creating Disk image...
	I0530 13:09:40.826095    7389 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:09:40.826238    7389 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2
	I0530 13:09:40.834777    7389 main.go:141] libmachine: STDOUT: 
	I0530 13:09:40.834791    7389 main.go:141] libmachine: STDERR: 
	I0530 13:09:40.834849    7389 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2 +20000M
	I0530 13:09:40.842009    7389 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:09:40.842026    7389 main.go:141] libmachine: STDERR: 
	I0530 13:09:40.842040    7389 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2
	I0530 13:09:40.842044    7389 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:09:40.842085    7389 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:5c:35:ab:52:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2
	I0530 13:09:40.843569    7389 main.go:141] libmachine: STDOUT: 
	I0530 13:09:40.843583    7389 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:09:40.843594    7389 client.go:171] LocalClient.Create took 213.188084ms
	I0530 13:09:42.845701    7389 start.go:128] duration metric: createHost completed in 2.270722125s
	I0530 13:09:42.845762    7389 start.go:83] releasing machines lock for "multinode-060000", held for 2.27119275s
	W0530 13:09:42.846366    7389 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-060000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-060000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:09:42.857071    7389 out.go:177] 
	W0530 13:09:42.860134    7389 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:09:42.860157    7389 out.go:239] * 
	* 
	W0530 13:09:42.862796    7389 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:09:42.873008    7389 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-060000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000: exit status 7 (67.233334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.78s)
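
Note on the failure above: both start attempts die at the same step, when the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client and gets Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. nothing was listening on the socket_vmnet socket on this host. As a rough, hypothetical illustration (standalone Go, standard library only, not part of minikube or of this test suite), a probe such as the following reproduces the same refusal whenever the daemon is down:

// socketcheck.go - hypothetical standalone probe, not part of minikube.
// The socket path mirrors SocketVMnetPath from the profile config shown in the log.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const socketPath = "/var/run/socket_vmnet" // same path the failures above report

	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the error in the log and usually means
		// the socket_vmnet daemon is not running (or not listening on this path).
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", socketPath)
}

If such a probe fails the same way, the socket_vmnet service (typically started alongside the qemu2 driver installation, e.g. via Homebrew/launchd) has to be brought up before the qemu2-driver tests in this report can pass.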

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (92.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (57.656292ms)

                                                
                                                
** stderr ** 
	W0530 13:09:43.012476    7406 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: cluster "multinode-060000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- rollout status deployment/busybox: exit status 1 (53.882667ms)

                                                
                                                
** stderr ** 
	W0530 13:09:43.066543    7409 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (54.414084ms)

                                                
                                                
** stderr ** 
	W0530 13:09:43.120975    7412 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.950167ms)

                                                
                                                
** stderr ** 
	W0530 13:09:43.966638    7415 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.171334ms)

                                                
                                                
** stderr ** 
	W0530 13:09:45.137861    7418 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.113542ms)

                                                
                                                
** stderr ** 
	W0530 13:09:48.380747    7421 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.804375ms)

                                                
                                                
** stderr ** 
	W0530 13:09:51.586627    7424 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.012ms)

                                                
                                                
** stderr ** 
	W0530 13:09:54.382506    7427 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.144209ms)

                                                
                                                
** stderr ** 
	W0530 13:10:05.234358    7430 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.160958ms)

                                                
                                                
** stderr ** 
	W0530 13:10:18.004526    7435 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.795584ms)

                                                
                                                
** stderr ** 
	W0530 13:10:38.730339    7438 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.780917ms)

                                                
                                                
** stderr ** 
	W0530 13:11:15.594910    7443 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.000709ms)

                                                
                                                
** stderr ** 
	W0530 13:11:15.649023    7446 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- exec  -- nslookup kubernetes.io: exit status 1 (54.064375ms)

                                                
                                                
** stderr ** 
	W0530 13:11:15.703222    7449 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- exec  -- nslookup kubernetes.default: exit status 1 (54.011083ms)

                                                
                                                
** stderr ** 
	W0530 13:11:15.757313    7452 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (54.630291ms)

                                                
                                                
** stderr ** 
	W0530 13:11:15.812016    7455 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000: exit status 7 (29.206667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (92.89s)
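
Note on the failures above: every minikube kubectl invocation first warns Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig and then reports that cluster "multinode-060000" has no server, which follows directly from FreshStart2Nodes never bringing a cluster up: no kubeconfig was ever written for kubectl to load. A minimal sketch of that missing precondition (standard library only; the fallback path is simply the one quoted in the warnings, not a general default):

// kubeconfigcheck.go - hypothetical helper, not part of minikube or this test suite.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Prefer KUBECONFIG, as kubectl does; fall back to the path from the warnings above.
	path := os.Getenv("KUBECONFIG")
	if path == "" {
		path = "/Users/jenkins/minikube-integration/16597-6175/kubeconfig"
	}
	if _, err := os.Stat(path); err != nil {
		// Matches the "Config not found" warnings: the cluster never started, so no
		// kubeconfig exists and every kubectl command against it must fail.
		fmt.Println("kubeconfig missing:", err)
		return
	}
	fmt.Println("kubeconfig present at", path)
}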

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-060000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.845917ms)

                                                
                                                
** stderr ** 
	W0530 13:11:15.895433    7460 loader.go:222] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: no server found for cluster "multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000: exit status 7 (28.96025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-060000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-060000 -v 3 --alsologtostderr: exit status 89 (41.215458ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-060000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:11:15.953267    7463 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:11:15.953403    7463 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:15.953406    7463 out.go:309] Setting ErrFile to fd 2...
	I0530 13:11:15.953409    7463 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:15.953478    7463 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:11:15.953712    7463 mustload.go:65] Loading cluster: multinode-060000
	I0530 13:11:15.953883    7463 config.go:182] Loaded profile config "multinode-060000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:11:15.958674    7463 out.go:177] * The control plane node must be running for this command
	I0530 13:11:15.962871    7463 out.go:177]   To start a cluster, run: "minikube start -p multinode-060000"

                                                
                                                
** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-060000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000: exit status 7 (28.470916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-060000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-060000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-060000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidd
en\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.27.2\",\"ClusterName\":\"multinode-060000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\
",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.27.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPat
h\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000: exit status 7 (28.746417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.11s)
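
Note on the failure above: the assertion counts the entries of the Nodes array inside the profile's Config as returned by "minikube profile list --output json"; it expects 3 (the 2 nodes requested at start plus the one AddNode tried to add), but since the cluster never started the array still holds only the single placeholder control-plane entry. A minimal sketch of reading that count (simplified structs and a generic "minikube" binary name are assumptions; the real types live in minikube's config package, and the test drives out/minikube-darwin-arm64):

// profilenodes.go - hypothetical helper, not part of minikube or this test suite.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models only the fields this sketch needs from "profile list --output json".
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	// Assumes a minikube binary on PATH; the test itself runs out/minikube-darwin-arm64.
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// For the failure above, multinode-060000 reports 1 node where the test expects 3.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}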

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-060000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-060000 status --output json --alsologtostderr: exit status 7 (28.441417ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-060000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:11:16.132072    7473 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:11:16.132195    7473 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:16.132198    7473 out.go:309] Setting ErrFile to fd 2...
	I0530 13:11:16.132201    7473 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:16.132269    7473 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:11:16.132381    7473 out.go:303] Setting JSON to true
	I0530 13:11:16.132392    7473 mustload.go:65] Loading cluster: multinode-060000
	I0530 13:11:16.132445    7473 notify.go:220] Checking for updates...
	I0530 13:11:16.132559    7473 config.go:182] Loaded profile config "multinode-060000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:11:16.132565    7473 status.go:255] checking status of multinode-060000 ...
	I0530 13:11:16.132745    7473 status.go:330] multinode-060000 host status = "Stopped" (err=<nil>)
	I0530 13:11:16.132749    7473 status.go:343] host is not running, skipping remaining checks
	I0530 13:11:16.132751    7473 status.go:257] multinode-060000 status: &{Name:multinode-060000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-060000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000: exit status 7 (28.578709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
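
Note on the failure above: with a single stopped node, "minikube status --output json" printed one bare JSON object ({"Name":"multinode-060000",...}), while the test decodes the output into []cmd.Status, hence the error json: cannot unmarshal object into Go value of type []cmd.Status. A minimal sketch (simplified Status struct, encoding/json only) showing the same object-vs-array mismatch:

// statusdecode.go - hypothetical illustration; the real Status type lives in minikube's cmd package.
package main

import (
	"encoding/json"
	"fmt"
)

// Status is a simplified stand-in for cmd.Status with just two of its fields.
type Status struct {
	Name string
	Host string
}

func main() {
	single := []byte(`{"Name":"multinode-060000","Host":"Stopped"}`)
	array := []byte(`[{"Name":"multinode-060000","Host":"Stopped"}]`)

	var statuses []Status
	// Mirrors the failure in the log: a lone object cannot be decoded into a slice.
	fmt.Println(json.Unmarshal(single, &statuses))
	// An array of objects decodes into the same slice without error.
	fmt.Println(json.Unmarshal(array, &statuses), statuses)
}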

                                                
                                    
TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-060000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-060000 node stop m03: exit status 85 (47.203125ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-060000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-060000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-060000 status: exit status 7 (28.892ms)

                                                
                                                
-- stdout --
	multinode-060000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-060000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-060000 status --alsologtostderr: exit status 7 (28.457083ms)

                                                
                                                
-- stdout --
	multinode-060000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:11:16.265929    7481 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:11:16.266077    7481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:16.266080    7481 out.go:309] Setting ErrFile to fd 2...
	I0530 13:11:16.266082    7481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:16.266169    7481 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:11:16.266283    7481 out.go:303] Setting JSON to false
	I0530 13:11:16.266294    7481 mustload.go:65] Loading cluster: multinode-060000
	I0530 13:11:16.266356    7481 notify.go:220] Checking for updates...
	I0530 13:11:16.266468    7481 config.go:182] Loaded profile config "multinode-060000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:11:16.266474    7481 status.go:255] checking status of multinode-060000 ...
	I0530 13:11:16.266653    7481 status.go:330] multinode-060000 host status = "Stopped" (err=<nil>)
	I0530 13:11:16.266656    7481 status.go:343] host is not running, skipping remaining checks
	I0530 13:11:16.266658    7481 status.go:257] multinode-060000 status: &{Name:multinode-060000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-060000 status --alsologtostderr": multinode-060000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000: exit status 7 (28.659ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-060000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-060000 node start m03 --alsologtostderr: exit status 85 (47.539792ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:11:16.323900    7485 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:11:16.324027    7485 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:16.324030    7485 out.go:309] Setting ErrFile to fd 2...
	I0530 13:11:16.324032    7485 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:16.324094    7485 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:11:16.324361    7485 mustload.go:65] Loading cluster: multinode-060000
	I0530 13:11:16.324536    7485 config.go:182] Loaded profile config "multinode-060000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:11:16.329114    7485 out.go:177] 
	W0530 13:11:16.332218    7485 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0530 13:11:16.332223    7485 out.go:239] * 
	* 
	W0530 13:11:16.334728    7485 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:11:16.339131    7485 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:256: I0530 13:11:16.323900    7485 out.go:296] Setting OutFile to fd 1 ...
I0530 13:11:16.324027    7485 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:11:16.324030    7485 out.go:309] Setting ErrFile to fd 2...
I0530 13:11:16.324032    7485 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0530 13:11:16.324094    7485 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
I0530 13:11:16.324361    7485 mustload.go:65] Loading cluster: multinode-060000
I0530 13:11:16.324536    7485 config.go:182] Loaded profile config "multinode-060000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0530 13:11:16.329114    7485 out.go:177] 
W0530 13:11:16.332218    7485 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0530 13:11:16.332223    7485 out.go:239] * 
* 
W0530 13:11:16.334728    7485 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0530 13:11:16.339131    7485 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-060000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-060000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-060000 status: exit status 7 (29.258083ms)

                                                
                                                
-- stdout --
	multinode-060000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-060000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000: exit status 7 (28.671875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.11s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (5.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-060000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-060000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-060000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-060000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.200158s)

                                                
                                                
-- stdout --
	* [multinode-060000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-060000 in cluster multinode-060000
	* Restarting existing qemu2 VM for "multinode-060000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-060000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:11:16.520057    7495 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:11:16.520204    7495 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:16.520208    7495 out.go:309] Setting ErrFile to fd 2...
	I0530 13:11:16.520210    7495 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:16.520284    7495 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:11:16.521595    7495 out.go:303] Setting JSON to false
	I0530 13:11:16.538005    7495 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4247,"bootTime":1685473229,"procs":428,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:11:16.538075    7495 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:11:16.543078    7495 out.go:177] * [multinode-060000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:11:16.556038    7495 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:11:16.553114    7495 notify.go:220] Checking for updates...
	I0530 13:11:16.564087    7495 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:11:16.568076    7495 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:11:16.571095    7495 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:11:16.580048    7495 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:11:16.589074    7495 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:11:16.594534    7495 config.go:182] Loaded profile config "multinode-060000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:11:16.594564    7495 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:11:16.599083    7495 out.go:177] * Using the qemu2 driver based on existing profile
	I0530 13:11:16.606051    7495 start.go:295] selected driver: qemu2
	I0530 13:11:16.606057    7495 start.go:870] validating driver "qemu2" against &{Name:multinode-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.2 ClusterName:multinode-060000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:11:16.606168    7495 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:11:16.608378    7495 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:11:16.608400    7495 cni.go:84] Creating CNI manager for ""
	I0530 13:11:16.608410    7495 cni.go:136] 1 nodes found, recommending kindnet
	I0530 13:11:16.608419    7495 start_flags.go:319] config:
	{Name:multinode-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-060000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:11:16.608503    7495 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:11:16.616098    7495 out.go:177] * Starting control plane node multinode-060000 in cluster multinode-060000
	I0530 13:11:16.620082    7495 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:11:16.620107    7495 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:11:16.620125    7495 cache.go:57] Caching tarball of preloaded images
	I0530 13:11:16.620199    7495 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:11:16.620205    7495 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:11:16.620271    7495 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/multinode-060000/config.json ...
	I0530 13:11:16.620678    7495 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:11:16.620692    7495 start.go:364] acquiring machines lock for multinode-060000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:11:16.620726    7495 start.go:368] acquired machines lock for "multinode-060000" in 27.958µs
	I0530 13:11:16.620737    7495 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:11:16.620741    7495 fix.go:55] fixHost starting: 
	I0530 13:11:16.620875    7495 fix.go:103] recreateIfNeeded on multinode-060000: state=Stopped err=<nil>
	W0530 13:11:16.620884    7495 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:11:16.629056    7495 out.go:177] * Restarting existing qemu2 VM for "multinode-060000" ...
	I0530 13:11:16.633100    7495 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:5c:35:ab:52:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2
	I0530 13:11:16.635325    7495 main.go:141] libmachine: STDOUT: 
	I0530 13:11:16.635349    7495 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:11:16.635383    7495 fix.go:57] fixHost completed within 14.640459ms
	I0530 13:11:16.635390    7495 start.go:83] releasing machines lock for "multinode-060000", held for 14.659084ms
	W0530 13:11:16.635400    7495 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:11:16.635474    7495 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:11:16.635480    7495 start.go:702] Will try again in 5 seconds ...
	I0530 13:11:21.637472    7495 start.go:364] acquiring machines lock for multinode-060000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:11:21.637784    7495 start.go:368] acquired machines lock for "multinode-060000" in 237.875µs
	I0530 13:11:21.637914    7495 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:11:21.637935    7495 fix.go:55] fixHost starting: 
	I0530 13:11:21.638597    7495 fix.go:103] recreateIfNeeded on multinode-060000: state=Stopped err=<nil>
	W0530 13:11:21.638623    7495 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:11:21.647215    7495 out.go:177] * Restarting existing qemu2 VM for "multinode-060000" ...
	I0530 13:11:21.651274    7495 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:5c:35:ab:52:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2
	I0530 13:11:21.660382    7495 main.go:141] libmachine: STDOUT: 
	I0530 13:11:21.660429    7495 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:11:21.660494    7495 fix.go:57] fixHost completed within 22.558875ms
	I0530 13:11:21.660515    7495 start.go:83] releasing machines lock for "multinode-060000", held for 22.697584ms
	W0530 13:11:21.660780    7495 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-060000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-060000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:11:21.667233    7495 out.go:177] 
	W0530 13:11:21.671335    7495 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:11:21.671358    7495 out.go:239] * 
	* 
	W0530 13:11:21.673241    7495 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:11:21.679149    7495 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-060000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-060000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000: exit status 7 (32.035875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.39s)
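Note: every restart attempt in this block fails at the same point: socket_vmnet_client cannot reach the network helper socket at /var/run/socket_vmnet, so the qemu2 VM never comes up and minikube exits with GUEST_PROVISION. A minimal sketch of a host-side check, assuming socket_vmnet is installed the usual way for the qemu2 driver (Homebrew package or a launchd service; adjust for a different setup):

	# does the unix socket exist, and who owns it?
	ls -l /var/run/socket_vmnet

	# is the daemon loaded/running?
	sudo launchctl list | grep socket_vmnet
	brew services list | grep socket_vmnet

If the socket is missing or the daemon is not running, that would point to a problem on the test host rather than a regression in the binary under test, and restarting the socket_vmnet service should clear the "Connection refused" errors captured above.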

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-060000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-060000 node delete m03: exit status 89 (39.087333ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-060000"

                                                
                                                
-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-060000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-060000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-060000 status --alsologtostderr: exit status 7 (28.84875ms)

                                                
                                                
-- stdout --
	multinode-060000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:11:21.857689    7510 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:11:21.857818    7510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:21.857821    7510 out.go:309] Setting ErrFile to fd 2...
	I0530 13:11:21.857823    7510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:21.857890    7510 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:11:21.858005    7510 out.go:303] Setting JSON to false
	I0530 13:11:21.858016    7510 mustload.go:65] Loading cluster: multinode-060000
	I0530 13:11:21.858082    7510 notify.go:220] Checking for updates...
	I0530 13:11:21.858205    7510 config.go:182] Loaded profile config "multinode-060000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:11:21.858210    7510 status.go:255] checking status of multinode-060000 ...
	I0530 13:11:21.858385    7510 status.go:330] multinode-060000 host status = "Stopped" (err=<nil>)
	I0530 13:11:21.858389    7510 status.go:343] host is not running, skipping remaining checks
	I0530 13:11:21.858391    7510 status.go:257] multinode-060000 status: &{Name:multinode-060000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-060000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000: exit status 7 (28.946375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (0.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-060000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-060000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-060000 status: exit status 7 (28.849041ms)

                                                
                                                
-- stdout --
	multinode-060000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-060000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-060000 status --alsologtostderr: exit status 7 (29.77125ms)

                                                
                                                
-- stdout --
	multinode-060000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:11:22.003261    7518 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:11:22.003402    7518 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:22.003405    7518 out.go:309] Setting ErrFile to fd 2...
	I0530 13:11:22.003407    7518 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:22.003479    7518 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:11:22.003599    7518 out.go:303] Setting JSON to false
	I0530 13:11:22.003611    7518 mustload.go:65] Loading cluster: multinode-060000
	I0530 13:11:22.003658    7518 notify.go:220] Checking for updates...
	I0530 13:11:22.004616    7518 config.go:182] Loaded profile config "multinode-060000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:11:22.004623    7518 status.go:255] checking status of multinode-060000 ...
	I0530 13:11:22.004893    7518 status.go:330] multinode-060000 host status = "Stopped" (err=<nil>)
	I0530 13:11:22.004897    7518 status.go:343] host is not running, skipping remaining checks
	I0530 13:11:22.004900    7518 status.go:257] multinode-060000 status: &{Name:multinode-060000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-060000 status --alsologtostderr": multinode-060000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-060000 status --alsologtostderr": multinode-060000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000: exit status 7 (28.926333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.15s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-060000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-060000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.172135167s)

                                                
                                                
-- stdout --
	* [multinode-060000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-060000 in cluster multinode-060000
	* Restarting existing qemu2 VM for "multinode-060000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-060000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:11:22.061330    7522 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:11:22.061440    7522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:22.061444    7522 out.go:309] Setting ErrFile to fd 2...
	I0530 13:11:22.061446    7522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:22.061510    7522 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:11:22.062486    7522 out.go:303] Setting JSON to false
	I0530 13:11:22.078345    7522 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4253,"bootTime":1685473229,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:11:22.078407    7522 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:11:22.083503    7522 out.go:177] * [multinode-060000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:11:22.090436    7522 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:11:22.090462    7522 notify.go:220] Checking for updates...
	I0530 13:11:22.097334    7522 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:11:22.100410    7522 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:11:22.103471    7522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:11:22.106404    7522 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:11:22.109420    7522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:11:22.112664    7522 config.go:182] Loaded profile config "multinode-060000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:11:22.112891    7522 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:11:22.116356    7522 out.go:177] * Using the qemu2 driver based on existing profile
	I0530 13:11:22.123421    7522 start.go:295] selected driver: qemu2
	I0530 13:11:22.123428    7522 start.go:870] validating driver "qemu2" against &{Name:multinode-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.2 ClusterName:multinode-060000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:11:22.123492    7522 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:11:22.125407    7522 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:11:22.125429    7522 cni.go:84] Creating CNI manager for ""
	I0530 13:11:22.125434    7522 cni.go:136] 1 nodes found, recommending kindnet
	I0530 13:11:22.125442    7522 start_flags.go:319] config:
	{Name:multinode-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-060000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:11:22.125520    7522 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:11:22.134376    7522 out.go:177] * Starting control plane node multinode-060000 in cluster multinode-060000
	I0530 13:11:22.138415    7522 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:11:22.138433    7522 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:11:22.138445    7522 cache.go:57] Caching tarball of preloaded images
	I0530 13:11:22.138514    7522 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:11:22.138520    7522 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:11:22.138584    7522 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/multinode-060000/config.json ...
	I0530 13:11:22.138975    7522 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:11:22.138987    7522 start.go:364] acquiring machines lock for multinode-060000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:11:22.139014    7522 start.go:368] acquired machines lock for "multinode-060000" in 22µs
	I0530 13:11:22.139024    7522 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:11:22.139027    7522 fix.go:55] fixHost starting: 
	I0530 13:11:22.139141    7522 fix.go:103] recreateIfNeeded on multinode-060000: state=Stopped err=<nil>
	W0530 13:11:22.139150    7522 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:11:22.146439    7522 out.go:177] * Restarting existing qemu2 VM for "multinode-060000" ...
	I0530 13:11:22.150446    7522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:5c:35:ab:52:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2
	I0530 13:11:22.152398    7522 main.go:141] libmachine: STDOUT: 
	I0530 13:11:22.152420    7522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:11:22.152448    7522 fix.go:57] fixHost completed within 13.419667ms
	I0530 13:11:22.152453    7522 start.go:83] releasing machines lock for "multinode-060000", held for 13.435209ms
	W0530 13:11:22.152465    7522 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:11:22.152525    7522 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:11:22.152530    7522 start.go:702] Will try again in 5 seconds ...
	I0530 13:11:27.154517    7522 start.go:364] acquiring machines lock for multinode-060000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:11:27.154971    7522 start.go:368] acquired machines lock for "multinode-060000" in 374.667µs
	I0530 13:11:27.155097    7522 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:11:27.155117    7522 fix.go:55] fixHost starting: 
	I0530 13:11:27.155813    7522 fix.go:103] recreateIfNeeded on multinode-060000: state=Stopped err=<nil>
	W0530 13:11:27.155838    7522 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:11:27.159911    7522 out.go:177] * Restarting existing qemu2 VM for "multinode-060000" ...
	I0530 13:11:27.163879    7522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:5c:35:ab:52:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/multinode-060000/disk.qcow2
	I0530 13:11:27.173169    7522 main.go:141] libmachine: STDOUT: 
	I0530 13:11:27.173237    7522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:11:27.173318    7522 fix.go:57] fixHost completed within 18.203417ms
	I0530 13:11:27.173335    7522 start.go:83] releasing machines lock for "multinode-060000", held for 18.345166ms
	W0530 13:11:27.173649    7522 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-060000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-060000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:11:27.179720    7522 out.go:177] 
	W0530 13:11:27.183894    7522 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:11:27.183923    7522 out.go:239] * 
	* 
	W0530 13:11:27.186484    7522 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:11:27.194732    7522 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-060000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000: exit status 7 (70.266959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.24s)
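Note: for reference, the restart path exercised by the last few subtests can be replayed by hand with the same binary and profile shown in the log; a sketch using the commands quoted above:

	out/minikube-darwin-arm64 stop -p multinode-060000
	out/minikube-darwin-arm64 start -p multinode-060000 --wait=true -v=8 --alsologtostderr --driver=qemu2

As long as nothing is listening on /var/run/socket_vmnet, the start command should reproduce the exit status 80 and "Connection refused" output above; the StartAfterStop, RestartKeepsNodes, StopMultiNode and RestartMultiNode failures all appear to be downstream of that single environment error rather than separate regressions.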

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-060000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-060000-m01 --driver=qemu2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-060000-m01 --driver=qemu2 : exit status 80 (10.017734084s)

                                                
                                                
-- stdout --
	* [multinode-060000-m01] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-060000-m01 in cluster multinode-060000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-060000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-060000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-060000-m02 --driver=qemu2 
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-060000-m02 --driver=qemu2 : exit status 80 (9.930424708s)

                                                
                                                
-- stdout --
	* [multinode-060000-m02] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-060000-m02 in cluster multinode-060000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-060000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-060000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-060000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-060000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-060000: exit status 89 (79.74975ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-060000"

                                                
                                                
-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-060000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-060000 -n multinode-060000: exit status 7 (30.880458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-060000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.20s)
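Note: the name-conflict scenario never reaches the conflict check itself, because neither scratch profile can boot. Only a delete of multinode-060000-m02 appears in the captured output (multinode_test.go:472), and the later TestPreload log still loads a "multinode-060000-m01" profile config, so the half-created multinode-060000-m01 profile is likely left behind. The cleanup the tool itself suggests in the stderr above would be, roughly:

	out/minikube-darwin-arm64 delete -p multinode-060000-m01

(manual cleanup sketch only; the suite may remove the profile later, outside the captured window).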

                                                
                                    
TestPreload (10s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-124000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-124000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.824916542s)

                                                
                                                
-- stdout --
	* [test-preload-124000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-124000 in cluster test-preload-124000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-124000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:11:47.640059    7576 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:11:47.640453    7576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:47.640458    7576 out.go:309] Setting ErrFile to fd 2...
	I0530 13:11:47.640461    7576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:11:47.640568    7576 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:11:47.642162    7576 out.go:303] Setting JSON to false
	I0530 13:11:47.657523    7576 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4278,"bootTime":1685473229,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:11:47.657586    7576 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:11:47.662055    7576 out.go:177] * [test-preload-124000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:11:47.669176    7576 notify.go:220] Checking for updates...
	I0530 13:11:47.673065    7576 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:11:47.676162    7576 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:11:47.679143    7576 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:11:47.682076    7576 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:11:47.685115    7576 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:11:47.688132    7576 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:11:47.689788    7576 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:11:47.689811    7576 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:11:47.694088    7576 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:11:47.700919    7576 start.go:295] selected driver: qemu2
	I0530 13:11:47.700931    7576 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:11:47.700939    7576 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:11:47.702799    7576 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:11:47.706090    7576 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:11:47.709369    7576 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:11:47.709388    7576 cni.go:84] Creating CNI manager for ""
	I0530 13:11:47.709402    7576 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:11:47.709406    7576 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0530 13:11:47.709411    7576 start_flags.go:319] config:
	{Name:test-preload-124000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-124000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:11:47.709486    7576 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:11:47.718096    7576 out.go:177] * Starting control plane node test-preload-124000 in cluster test-preload-124000
	I0530 13:11:47.722125    7576 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0530 13:11:47.722197    7576 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/test-preload-124000/config.json ...
	I0530 13:11:47.722221    7576 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/test-preload-124000/config.json: {Name:mka27a2a9824e173457361b261bfa536a627942b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:11:47.722251    7576 cache.go:107] acquiring lock: {Name:mk1f7e1161855fb214230a0b223d520a4ca2b6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:11:47.722264    7576 cache.go:107] acquiring lock: {Name:mkdaf9ebd94f302feaa737652e0c2c08747f5380 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:11:47.722279    7576 cache.go:107] acquiring lock: {Name:mkde7360941e403085c67c516a939b7bf923d66e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:11:47.722390    7576 cache.go:107] acquiring lock: {Name:mkb22ae9ddc8fc9817f05b3f5802661994ab1774 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:11:47.722465    7576 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0530 13:11:47.722466    7576 cache.go:107] acquiring lock: {Name:mkf0b5526c1cf5dde50508058f2d28e31b6fe32c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:11:47.722498    7576 cache.go:107] acquiring lock: {Name:mk41caebc7ff26a26b545871fc34c82377dcea85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:11:47.722556    7576 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0530 13:11:47.722558    7576 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0530 13:11:47.722512    7576 cache.go:107] acquiring lock: {Name:mkccc0b9527dd8984ef4d70dd4918f0c708807d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:11:47.722636    7576 cache.go:107] acquiring lock: {Name:mk49a48796b993ad9018628d29751e5d61ac7d8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:11:47.722655    7576 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:11:47.722670    7576 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0530 13:11:47.722671    7576 start.go:364] acquiring machines lock for test-preload-124000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:11:47.722707    7576 start.go:368] acquired machines lock for "test-preload-124000" in 29.167µs
	I0530 13:11:47.722750    7576 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0530 13:11:47.722721    7576 start.go:93] Provisioning new machine with config: &{Name:test-preload-124000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-124000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:11:47.722761    7576 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:11:47.722783    7576 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0530 13:11:47.722802    7576 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0530 13:11:47.722809    7576 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0530 13:11:47.731080    7576 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:11:47.747594    7576 start.go:159] libmachine.API.Create for "test-preload-124000" (driver="qemu2")
	I0530 13:11:47.747612    7576 client.go:168] LocalClient.Create starting
	I0530 13:11:47.747668    7576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:11:47.747688    7576 main.go:141] libmachine: Decoding PEM data...
	I0530 13:11:47.747699    7576 main.go:141] libmachine: Parsing certificate...
	I0530 13:11:47.747734    7576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:11:47.747750    7576 main.go:141] libmachine: Decoding PEM data...
	I0530 13:11:47.747758    7576 main.go:141] libmachine: Parsing certificate...
	I0530 13:11:47.748080    7576 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:11:47.748611    7576 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0530 13:11:47.749990    7576 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0530 13:11:47.750097    7576 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0530 13:11:47.750156    7576 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0530 13:11:47.750210    7576 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0530 13:11:47.751265    7576 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0530 13:11:47.752691    7576 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0530 13:11:47.752781    7576 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0530 13:11:47.866959    7576 main.go:141] libmachine: Creating SSH key...
	I0530 13:11:47.953493    7576 main.go:141] libmachine: Creating Disk image...
	I0530 13:11:47.953510    7576 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:11:47.953688    7576 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/disk.qcow2
	I0530 13:11:47.962794    7576 main.go:141] libmachine: STDOUT: 
	I0530 13:11:47.962812    7576 main.go:141] libmachine: STDERR: 
	I0530 13:11:47.962885    7576 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/disk.qcow2 +20000M
	I0530 13:11:47.971192    7576 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:11:47.971212    7576 main.go:141] libmachine: STDERR: 
	I0530 13:11:47.971236    7576 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/disk.qcow2
	I0530 13:11:47.971245    7576 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:11:47.971283    7576 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:5c:1f:41:c9:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/disk.qcow2
	I0530 13:11:47.972986    7576 main.go:141] libmachine: STDOUT: 
	I0530 13:11:47.972998    7576 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:11:47.973015    7576 client.go:171] LocalClient.Create took 225.403208ms
	W0530 13:11:48.740278    7576 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0530 13:11:48.740305    7576 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0530 13:11:49.170039    7576 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0530 13:11:49.208664    7576 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0530 13:11:49.210581    7576 cache.go:157] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0530 13:11:49.210592    7576 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.488381875s
	I0530 13:11:49.210602    7576 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0530 13:11:49.255745    7576 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0530 13:11:49.380476    7576 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0530 13:11:49.424237    7576 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0530 13:11:49.667155    7576 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0530 13:11:49.809198    7576 cache.go:157] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0530 13:11:49.809267    7576 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.08689875s
	I0530 13:11:49.809301    7576 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0530 13:11:49.844075    7576 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0530 13:11:49.844173    7576 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0530 13:11:49.973507    7576 start.go:128] duration metric: createHost completed in 2.25077275s
	I0530 13:11:49.973559    7576 start.go:83] releasing machines lock for "test-preload-124000", held for 2.250899292s
	W0530 13:11:49.973616    7576 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:11:49.986848    7576 out.go:177] * Deleting "test-preload-124000" in qemu2 ...
	W0530 13:11:50.006618    7576 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:11:50.006645    7576 start.go:702] Will try again in 5 seconds ...
	I0530 13:11:51.407046    7576 cache.go:157] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0530 13:11:51.407088    7576 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.684684333s
	I0530 13:11:51.407118    7576 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0530 13:11:52.208484    7576 cache.go:157] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0530 13:11:52.208532    7576 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.486208417s
	I0530 13:11:52.208558    7576 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0530 13:11:52.419449    7576 cache.go:157] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0530 13:11:52.419502    7576 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.697363167s
	I0530 13:11:52.419530    7576 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0530 13:11:52.682794    7576 cache.go:157] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0530 13:11:52.682863    7576 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.960718125s
	I0530 13:11:52.682896    7576 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0530 13:11:54.008451    7576 cache.go:157] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0530 13:11:54.008497    7576 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.286011791s
	I0530 13:11:54.008525    7576 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0530 13:11:55.006815    7576 start.go:364] acquiring machines lock for test-preload-124000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:11:55.007295    7576 start.go:368] acquired machines lock for "test-preload-124000" in 390.583µs
	I0530 13:11:55.007393    7576 start.go:93] Provisioning new machine with config: &{Name:test-preload-124000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-124000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:11:55.007617    7576 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:11:55.016303    7576 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:11:55.064297    7576 start.go:159] libmachine.API.Create for "test-preload-124000" (driver="qemu2")
	I0530 13:11:55.064341    7576 client.go:168] LocalClient.Create starting
	I0530 13:11:55.064527    7576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:11:55.064609    7576 main.go:141] libmachine: Decoding PEM data...
	I0530 13:11:55.064633    7576 main.go:141] libmachine: Parsing certificate...
	I0530 13:11:55.064773    7576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:11:55.064812    7576 main.go:141] libmachine: Decoding PEM data...
	I0530 13:11:55.064828    7576 main.go:141] libmachine: Parsing certificate...
	I0530 13:11:55.065373    7576 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:11:55.194141    7576 main.go:141] libmachine: Creating SSH key...
	I0530 13:11:55.376541    7576 main.go:141] libmachine: Creating Disk image...
	I0530 13:11:55.376550    7576 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:11:55.376728    7576 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/disk.qcow2
	I0530 13:11:55.385758    7576 main.go:141] libmachine: STDOUT: 
	I0530 13:11:55.385778    7576 main.go:141] libmachine: STDERR: 
	I0530 13:11:55.385845    7576 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/disk.qcow2 +20000M
	I0530 13:11:55.393326    7576 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:11:55.393340    7576 main.go:141] libmachine: STDERR: 
	I0530 13:11:55.393360    7576 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/disk.qcow2
	I0530 13:11:55.393369    7576 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:11:55.393408    7576 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:7b:0c:95:15:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/test-preload-124000/disk.qcow2
	I0530 13:11:55.394869    7576 main.go:141] libmachine: STDOUT: 
	I0530 13:11:55.394882    7576 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:11:55.394894    7576 client.go:171] LocalClient.Create took 330.557125ms
	I0530 13:11:57.395063    7576 start.go:128] duration metric: createHost completed in 2.387448667s
	I0530 13:11:57.395111    7576 start.go:83] releasing machines lock for "test-preload-124000", held for 2.387853125s
	W0530 13:11:57.395560    7576 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-124000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-124000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:11:57.408078    7576 out.go:177] 
	W0530 13:11:57.411243    7576 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:11:57.411300    7576 out.go:239] * 
	* 
	W0530 13:11:57.413736    7576 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:11:57.424102    7576 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-124000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-05-30 13:11:57.439635 -0700 PDT m=+427.778924126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-124000 -n test-preload-124000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-124000 -n test-preload-124000: exit status 7 (65.549541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-124000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-124000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-124000
--- FAIL: TestPreload (10.00s)
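Note on root cause: this failure, and most of the qemu2 start failures that follow, share the same host-side error: socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet ("Connection refused"), so the QEMU VM is never launched and minikube exits with status 80. A minimal way to check the socket on the build host might look like the commands below; the Homebrew service invocation is an assumption about how socket_vmnet was installed, not something shown in this log.

	# does the socket path minikube is configured with exist?
	ls -l /var/run/socket_vmnet
	# is a socket_vmnet daemon running to accept connections?
	pgrep -fl socket_vmnet
	# if not, (re)start it; the exact command depends on how socket_vmnet was installed
	sudo brew services start socket_vmnet

If no daemon is listening on that socket, every test that starts a qemu2 VM on the socket_vmnet network fails the same way, which matches the pattern across this report.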

                                                
                                    
TestScheduledStopUnix (9.95s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-113000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-113000 --memory=2048 --driver=qemu2 : exit status 80 (9.775800167s)

                                                
                                                
-- stdout --
	* [scheduled-stop-113000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-113000 in cluster scheduled-stop-113000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-113000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-113000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-113000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-113000 in cluster scheduled-stop-113000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-113000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-113000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-05-30 13:12:07.388439 -0700 PDT m=+437.727974001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-113000 -n scheduled-stop-113000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-113000 -n scheduled-stop-113000: exit status 7 (68.896208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-113000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-113000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-113000
--- FAIL: TestScheduledStopUnix (9.95s)

                                                
                                    
TestSkaffold (14.53s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3042050848 version
skaffold_test.go:63: skaffold version: v2.5.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-298000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-298000 --memory=2600 --driver=qemu2 : exit status 80 (9.825324791s)

                                                
                                                
-- stdout --
	* [skaffold-298000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-298000 in cluster skaffold-298000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-298000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-298000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-298000 in cluster skaffold-298000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-298000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-05-30 13:12:21.927998 -0700 PDT m=+452.267891542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-298000 -n skaffold-298000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-298000 -n skaffold-298000: exit status 7 (61.719459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-298000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-298000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-298000
--- FAIL: TestSkaffold (14.53s)

                                                
                                    
TestRunningBinaryUpgrade (167.87s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:106: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-05-30 13:15:49.478604 -0700 PDT m=+659.815133417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-736000 -n running-upgrade-736000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-736000 -n running-upgrade-736000: exit status 85 (86.735125ms)

                                                
                                                
-- stdout --
	* Profile "running-upgrade-736000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-736000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-736000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-736000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-736000\"")
helpers_test.go:175: Cleaning up "running-upgrade-736000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-736000
--- FAIL: TestRunningBinaryUpgrade (167.87s)
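Note on root cause: unlike the socket_vmnet failures above, TestRunningBinaryUpgrade failed before any VM was created: downloading the old v1.6.2 minikube release returned HTTP 404. minikube v1.6.2 predates darwin/arm64 builds, so no such binary exists for this arm64 host. A hedged spot-check from the host is sketched below; the GitHub release URL pattern is an assumption about where the binaries are published, since the test's actual download URL is not shown in this log.

	# expected to print 404: v1.6.2 shipped no darwin-arm64 binary
	curl -sL -o /dev/null -w '%{http_code}\n' https://github.com/kubernetes/minikube/releases/download/v1.6.2/minikube-darwin-arm64
	# the darwin-amd64 binary of the same release should print 200
	curl -sL -o /dev/null -w '%{http_code}\n' https://github.com/kubernetes/minikube/releases/download/v1.6.2/minikube-darwin-amd64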

                                                
                                    
TestKubernetesUpgrade (15.41s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-838000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-838000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.895845125s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-838000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-838000 in cluster kubernetes-upgrade-838000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:15:49.864005    8044 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:15:49.864142    8044 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:15:49.864144    8044 out.go:309] Setting ErrFile to fd 2...
	I0530 13:15:49.864147    8044 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:15:49.864208    8044 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:15:49.865258    8044 out.go:303] Setting JSON to false
	I0530 13:15:49.880384    8044 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4520,"bootTime":1685473229,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:15:49.880465    8044 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:15:49.884869    8044 out.go:177] * [kubernetes-upgrade-838000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:15:49.892068    8044 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:15:49.892169    8044 notify.go:220] Checking for updates...
	I0530 13:15:49.894933    8044 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:15:49.897981    8044 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:15:49.901023    8044 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:15:49.902396    8044 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:15:49.905029    8044 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:15:49.908381    8044 config.go:182] Loaded profile config "cert-expiration-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:15:49.908452    8044 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:15:49.908475    8044 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:15:49.912852    8044 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:15:49.919994    8044 start.go:295] selected driver: qemu2
	I0530 13:15:49.920001    8044 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:15:49.920006    8044 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:15:49.921816    8044 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:15:49.925015    8044 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:15:49.928045    8044 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0530 13:15:49.928061    8044 cni.go:84] Creating CNI manager for ""
	I0530 13:15:49.928071    8044 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0530 13:15:49.928082    8044 start_flags.go:319] config:
	{Name:kubernetes-upgrade-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:15:49.928168    8044 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:15:49.936921    8044 out.go:177] * Starting control plane node kubernetes-upgrade-838000 in cluster kubernetes-upgrade-838000
	I0530 13:15:49.940980    8044 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0530 13:15:49.941002    8044 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0530 13:15:49.941012    8044 cache.go:57] Caching tarball of preloaded images
	I0530 13:15:49.941080    8044 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:15:49.941084    8044 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0530 13:15:49.941138    8044 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/kubernetes-upgrade-838000/config.json ...
	I0530 13:15:49.941149    8044 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/kubernetes-upgrade-838000/config.json: {Name:mk7796bfc560ba7ec765ee3e129ae368454994cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:15:49.941352    8044 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:15:49.941366    8044 start.go:364] acquiring machines lock for kubernetes-upgrade-838000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:15:49.941394    8044 start.go:368] acquired machines lock for "kubernetes-upgrade-838000" in 23.084µs
	I0530 13:15:49.941409    8044 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:15:49.941435    8044 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:15:49.949997    8044 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:15:49.966689    8044 start.go:159] libmachine.API.Create for "kubernetes-upgrade-838000" (driver="qemu2")
	I0530 13:15:49.966713    8044 client.go:168] LocalClient.Create starting
	I0530 13:15:49.966780    8044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:15:49.966803    8044 main.go:141] libmachine: Decoding PEM data...
	I0530 13:15:49.966813    8044 main.go:141] libmachine: Parsing certificate...
	I0530 13:15:49.966853    8044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:15:49.966874    8044 main.go:141] libmachine: Decoding PEM data...
	I0530 13:15:49.966880    8044 main.go:141] libmachine: Parsing certificate...
	I0530 13:15:49.967461    8044 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:15:50.083290    8044 main.go:141] libmachine: Creating SSH key...
	I0530 13:15:50.322597    8044 main.go:141] libmachine: Creating Disk image...
	I0530 13:15:50.322608    8044 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:15:50.322799    8044 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0530 13:15:50.332260    8044 main.go:141] libmachine: STDOUT: 
	I0530 13:15:50.332276    8044 main.go:141] libmachine: STDERR: 
	I0530 13:15:50.332332    8044 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2 +20000M
	I0530 13:15:50.339590    8044 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:15:50.339604    8044 main.go:141] libmachine: STDERR: 
	I0530 13:15:50.339617    8044 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0530 13:15:50.339626    8044 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:15:50.339666    8044 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:9d:7e:02:1c:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0530 13:15:50.341240    8044 main.go:141] libmachine: STDOUT: 
	I0530 13:15:50.341256    8044 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:15:50.341274    8044 client.go:171] LocalClient.Create took 374.564ms
	I0530 13:15:52.343550    8044 start.go:128] duration metric: createHost completed in 2.402105667s
	I0530 13:15:52.343647    8044 start.go:83] releasing machines lock for "kubernetes-upgrade-838000", held for 2.402294333s
	W0530 13:15:52.343705    8044 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:15:52.355197    8044 out.go:177] * Deleting "kubernetes-upgrade-838000" in qemu2 ...
	W0530 13:15:52.376733    8044 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:15:52.376763    8044 start.go:702] Will try again in 5 seconds ...
	I0530 13:15:57.378931    8044 start.go:364] acquiring machines lock for kubernetes-upgrade-838000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:15:57.379260    8044 start.go:368] acquired machines lock for "kubernetes-upgrade-838000" in 245.375µs
	I0530 13:15:57.379371    8044 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:15:57.379637    8044 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:15:57.389415    8044 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:15:57.436276    8044 start.go:159] libmachine.API.Create for "kubernetes-upgrade-838000" (driver="qemu2")
	I0530 13:15:57.436311    8044 client.go:168] LocalClient.Create starting
	I0530 13:15:57.436413    8044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:15:57.436453    8044 main.go:141] libmachine: Decoding PEM data...
	I0530 13:15:57.436478    8044 main.go:141] libmachine: Parsing certificate...
	I0530 13:15:57.436571    8044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:15:57.436599    8044 main.go:141] libmachine: Decoding PEM data...
	I0530 13:15:57.436616    8044 main.go:141] libmachine: Parsing certificate...
	I0530 13:15:57.437105    8044 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:15:57.565082    8044 main.go:141] libmachine: Creating SSH key...
	I0530 13:15:57.673008    8044 main.go:141] libmachine: Creating Disk image...
	I0530 13:15:57.673017    8044 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:15:57.673162    8044 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0530 13:15:57.681838    8044 main.go:141] libmachine: STDOUT: 
	I0530 13:15:57.681855    8044 main.go:141] libmachine: STDERR: 
	I0530 13:15:57.681909    8044 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2 +20000M
	I0530 13:15:57.689088    8044 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:15:57.689117    8044 main.go:141] libmachine: STDERR: 
	I0530 13:15:57.689137    8044 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0530 13:15:57.689145    8044 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:15:57.689188    8044 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:20:3e:6d:b0:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0530 13:15:57.690715    8044 main.go:141] libmachine: STDOUT: 
	I0530 13:15:57.690729    8044 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:15:57.690741    8044 client.go:171] LocalClient.Create took 254.431458ms
	I0530 13:15:59.692868    8044 start.go:128] duration metric: createHost completed in 2.313256917s
	I0530 13:15:59.692922    8044 start.go:83] releasing machines lock for "kubernetes-upgrade-838000", held for 2.31368525s
	W0530 13:15:59.693745    8044 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:15:59.702213    8044 out.go:177] 
	W0530 13:15:59.707503    8044 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:15:59.707531    8044 out.go:239] * 
	* 
	W0530 13:15:59.710184    8044 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:15:59.723346    8044 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-838000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-838000
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-838000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-838000 status --format={{.Host}}: exit status 7 (34.587458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-838000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-838000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.172785667s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-838000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-838000 in cluster kubernetes-upgrade-838000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-838000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-838000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:15:59.896613    8073 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:15:59.896719    8073 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:15:59.896721    8073 out.go:309] Setting ErrFile to fd 2...
	I0530 13:15:59.896731    8073 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:15:59.896808    8073 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:15:59.897781    8073 out.go:303] Setting JSON to false
	I0530 13:15:59.912786    8073 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4530,"bootTime":1685473229,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:15:59.912859    8073 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:15:59.917747    8073 out.go:177] * [kubernetes-upgrade-838000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:15:59.920757    8073 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:15:59.920818    8073 notify.go:220] Checking for updates...
	I0530 13:15:59.928726    8073 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:15:59.931759    8073 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:15:59.934764    8073 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:15:59.937676    8073 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:15:59.940755    8073 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:15:59.943963    8073 config.go:182] Loaded profile config "kubernetes-upgrade-838000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0530 13:15:59.944220    8073 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:15:59.948675    8073 out.go:177] * Using the qemu2 driver based on existing profile
	I0530 13:15:59.955740    8073 start.go:295] selected driver: qemu2
	I0530 13:15:59.955746    8073 start.go:870] validating driver "qemu2" against &{Name:kubernetes-upgrade-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:15:59.955828    8073 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:15:59.957658    8073 cni.go:84] Creating CNI manager for ""
	I0530 13:15:59.957674    8073 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:15:59.957681    8073 start_flags.go:319] config:
	{Name:kubernetes-upgrade-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubernetes-upgrade-838000 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:15:59.957745    8073 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:15:59.965766    8073 out.go:177] * Starting control plane node kubernetes-upgrade-838000 in cluster kubernetes-upgrade-838000
	I0530 13:15:59.969529    8073 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:15:59.969546    8073 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:15:59.969559    8073 cache.go:57] Caching tarball of preloaded images
	I0530 13:15:59.969621    8073 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:15:59.969627    8073 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:15:59.969685    8073 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/kubernetes-upgrade-838000/config.json ...
	I0530 13:15:59.970037    8073 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:15:59.970049    8073 start.go:364] acquiring machines lock for kubernetes-upgrade-838000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:15:59.970076    8073 start.go:368] acquired machines lock for "kubernetes-upgrade-838000" in 22.708µs
	I0530 13:15:59.970086    8073 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:15:59.970089    8073 fix.go:55] fixHost starting: 
	I0530 13:15:59.970191    8073 fix.go:103] recreateIfNeeded on kubernetes-upgrade-838000: state=Stopped err=<nil>
	W0530 13:15:59.970199    8073 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:15:59.977727    8073 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-838000" ...
	I0530 13:15:59.981742    8073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:20:3e:6d:b0:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0530 13:15:59.983531    8073 main.go:141] libmachine: STDOUT: 
	I0530 13:15:59.983548    8073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:15:59.983574    8073 fix.go:57] fixHost completed within 13.484333ms
	I0530 13:15:59.983579    8073 start.go:83] releasing machines lock for "kubernetes-upgrade-838000", held for 13.499541ms
	W0530 13:15:59.983586    8073 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:15:59.983636    8073 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:15:59.983641    8073 start.go:702] Will try again in 5 seconds ...
	I0530 13:16:04.985723    8073 start.go:364] acquiring machines lock for kubernetes-upgrade-838000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:16:04.986167    8073 start.go:368] acquired machines lock for "kubernetes-upgrade-838000" in 310.125µs
	I0530 13:16:04.986361    8073 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:16:04.986382    8073 fix.go:55] fixHost starting: 
	I0530 13:16:04.987110    8073 fix.go:103] recreateIfNeeded on kubernetes-upgrade-838000: state=Stopped err=<nil>
	W0530 13:16:04.987136    8073 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:16:04.991845    8073 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-838000" ...
	I0530 13:16:04.998672    8073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:20:3e:6d:b0:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubernetes-upgrade-838000/disk.qcow2
	I0530 13:16:05.009802    8073 main.go:141] libmachine: STDOUT: 
	I0530 13:16:05.009910    8073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:16:05.010009    8073 fix.go:57] fixHost completed within 23.629958ms
	I0530 13:16:05.010027    8073 start.go:83] releasing machines lock for "kubernetes-upgrade-838000", held for 23.840625ms
	W0530 13:16:05.010419    8073 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-838000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-838000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:16:05.018789    8073 out.go:177] 
	W0530 13:16:05.021769    8073 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:16:05.021798    8073 out.go:239] * 
	* 
	W0530 13:16:05.023040    8073 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:16:05.031763    8073 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:257: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-838000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-838000 version --output=json
version_upgrade_test.go:260: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-838000 version --output=json: exit status 1 (65.900333ms)

                                                
                                                
** stderr ** 
	W0530 13:16:05.110191    8089 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "kubernetes-upgrade-838000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:262: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-05-30 13:16:05.110988 -0700 PDT m=+675.447859501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-838000 -n kubernetes-upgrade-838000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-838000 -n kubernetes-upgrade-838000: exit status 7 (33.144583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-838000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-838000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-838000
--- FAIL: TestKubernetesUpgrade (15.41s)
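
Every qemu2 start attempt in the block above fails at the same point: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so no VM is ever created. A minimal diagnostic sketch for the host, using only the paths that appear in the log (how socket_vmnet was installed or is supervised on this agent is not shown and is an assumption):

    # Does the daemon socket exist, and is a socket_vmnet process holding it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # Reproduce the failing connection with the same client binary and socket seen in the log;
    # if the daemon is down this prints the same "Connection refused" error.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok

    # Remediation depends on how the daemon is managed on this host (launchd plist, brew services,
    # or a manual sudo invocation); that detail is not visible in this report.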

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.2s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=16597
- KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3418296371/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.20s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.09s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=16597
- KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2156668422/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.09s)
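
Both TestHyperkitDriverSkipUpgrade subtests fail the same way: the hyperkit driver has no darwin/arm64 build, and this agent is darwin/arm64, so minikube exits 56 with DRV_UNSUPPORTED_OS before any upgrade logic runs. This is an environment mismatch on the M1 agent rather than a regression in the driver-upgrade path. A sketch of the check (the profile name below is hypothetical; the arm64 result is already implied by the "on darwin (arm64)" banner in the output above):

    # Confirm the host architecture; hyperkit is only available for darwin/amd64.
    uname -m    # prints arm64 on this agent

    # A hyperkit start on this host should reproduce the same exit status 56 / DRV_UNSUPPORTED_OS.
    out/minikube-darwin-arm64 start -p hyperkit-probe --driver=hyperkit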

                                                
                                    
TestStoppedBinaryUpgrade/Setup (141.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:167: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (141.29s)
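
The Setup subtest installs the old minikube v1.6.2 release binary used later in the upgrade, and that install fails with a 404. A plausible cause (an assumption, not confirmed by the log) is that v1.6.2 predates darwin/arm64 release binaries, so no asset matching this agent's platform exists for that tag. A quick check against the GitHub release assets, with the URL pattern assumed from the current release layout:

    # Asset this darwin/arm64 agent would need (likely missing for such an old tag):
    curl -s -o /dev/null -w '%{http_code}\n' -L \
      https://github.com/kubernetes/minikube/releases/download/v1.6.2/minikube-darwin-arm64

    # The darwin/amd64 asset for the same tag, for comparison:
    curl -s -o /dev/null -w '%{http_code}\n' -L \
      https://github.com/kubernetes/minikube/releases/download/v1.6.2/minikube-darwin-amd64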

                                                
                                    
TestPause/serial/Start (9.89s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-008000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-008000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.824037917s)

                                                
                                                
-- stdout --
	* [pause-008000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-008000 in cluster pause-008000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-008000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-008000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-008000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-008000 -n pause-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-008000 -n pause-008000: exit status 7 (69.601042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.89s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-040000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-040000 --driver=qemu2 : exit status 80 (9.681545875s)

                                                
                                                
-- stdout --
	* [NoKubernetes-040000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-040000 in cluster NoKubernetes-040000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-040000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-040000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-040000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000: exit status 7 (69.993083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.75s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --driver=qemu2 : exit status 80 (5.297961458s)

                                                
                                                
-- stdout --
	* [NoKubernetes-040000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-040000
	* Restarting existing qemu2 VM for "NoKubernetes-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-040000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000: exit status 7 (68.559125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.37s)

                                                
                                    
TestNoKubernetes/serial/Start (5.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --driver=qemu2 : exit status 80 (5.295250708s)

                                                
                                                
-- stdout --
	* [NoKubernetes-040000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-040000
	* Restarting existing qemu2 VM for "NoKubernetes-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-040000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000: exit status 7 (67.956792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.36s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-040000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-040000 --driver=qemu2 : exit status 80 (5.290639125s)

                                                
                                                
-- stdout --
	* [NoKubernetes-040000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-040000
	* Restarting existing qemu2 VM for "NoKubernetes-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-040000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-040000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-040000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-040000 -n NoKubernetes-040000: exit status 7 (65.628417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-040000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.36s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.754104583s)

                                                
                                                
-- stdout --
	* [auto-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-013000 in cluster auto-013000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-013000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:16:41.644584    8179 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:16:41.644742    8179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:16:41.644745    8179 out.go:309] Setting ErrFile to fd 2...
	I0530 13:16:41.644747    8179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:16:41.644814    8179 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:16:41.645869    8179 out.go:303] Setting JSON to false
	I0530 13:16:41.661117    8179 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4572,"bootTime":1685473229,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:16:41.661196    8179 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:16:41.666381    8179 out.go:177] * [auto-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:16:41.673252    8179 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:16:41.673314    8179 notify.go:220] Checking for updates...
	I0530 13:16:41.680193    8179 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:16:41.683299    8179 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:16:41.686323    8179 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:16:41.689320    8179 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:16:41.692285    8179 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:16:41.695690    8179 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:16:41.695708    8179 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:16:41.700259    8179 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:16:41.707261    8179 start.go:295] selected driver: qemu2
	I0530 13:16:41.707268    8179 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:16:41.707274    8179 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:16:41.709152    8179 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:16:41.712244    8179 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:16:41.715327    8179 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:16:41.715346    8179 cni.go:84] Creating CNI manager for ""
	I0530 13:16:41.715356    8179 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:16:41.715360    8179 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0530 13:16:41.715373    8179 start_flags.go:319] config:
	{Name:auto-013000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:auto-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:16:41.715447    8179 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:16:41.724204    8179 out.go:177] * Starting control plane node auto-013000 in cluster auto-013000
	I0530 13:16:41.728296    8179 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:16:41.728325    8179 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:16:41.728340    8179 cache.go:57] Caching tarball of preloaded images
	I0530 13:16:41.728412    8179 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:16:41.728417    8179 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:16:41.728474    8179 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/auto-013000/config.json ...
	I0530 13:16:41.728488    8179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/auto-013000/config.json: {Name:mk189ecf7263b14a2b43e397a1a5d05bd6e7a369 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:16:41.728697    8179 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:16:41.728716    8179 start.go:364] acquiring machines lock for auto-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:16:41.728747    8179 start.go:368] acquired machines lock for "auto-013000" in 26.458µs
	I0530 13:16:41.728766    8179 start.go:93] Provisioning new machine with config: &{Name:auto-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.27.2 ClusterName:auto-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:16:41.728796    8179 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:16:41.737265    8179 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:16:41.754232    8179 start.go:159] libmachine.API.Create for "auto-013000" (driver="qemu2")
	I0530 13:16:41.754255    8179 client.go:168] LocalClient.Create starting
	I0530 13:16:41.754311    8179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:16:41.754331    8179 main.go:141] libmachine: Decoding PEM data...
	I0530 13:16:41.754339    8179 main.go:141] libmachine: Parsing certificate...
	I0530 13:16:41.754360    8179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:16:41.754374    8179 main.go:141] libmachine: Decoding PEM data...
	I0530 13:16:41.754383    8179 main.go:141] libmachine: Parsing certificate...
	I0530 13:16:41.754697    8179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:16:41.869575    8179 main.go:141] libmachine: Creating SSH key...
	I0530 13:16:42.015943    8179 main.go:141] libmachine: Creating Disk image...
	I0530 13:16:42.015955    8179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:16:42.016125    8179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/disk.qcow2
	I0530 13:16:42.024819    8179 main.go:141] libmachine: STDOUT: 
	I0530 13:16:42.024836    8179 main.go:141] libmachine: STDERR: 
	I0530 13:16:42.024888    8179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/disk.qcow2 +20000M
	I0530 13:16:42.031919    8179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:16:42.031948    8179 main.go:141] libmachine: STDERR: 
	I0530 13:16:42.031968    8179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/disk.qcow2
	I0530 13:16:42.031973    8179 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:16:42.032016    8179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:fe:01:50:04:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/disk.qcow2
	I0530 13:16:42.033476    8179 main.go:141] libmachine: STDOUT: 
	I0530 13:16:42.033488    8179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:16:42.033510    8179 client.go:171] LocalClient.Create took 279.254458ms
	I0530 13:16:44.035625    8179 start.go:128] duration metric: createHost completed in 2.306860625s
	I0530 13:16:44.035685    8179 start.go:83] releasing machines lock for "auto-013000", held for 2.306980125s
	W0530 13:16:44.035738    8179 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:16:44.047386    8179 out.go:177] * Deleting "auto-013000" in qemu2 ...
	W0530 13:16:44.070837    8179 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:16:44.070865    8179 start.go:702] Will try again in 5 seconds ...
	I0530 13:16:49.073090    8179 start.go:364] acquiring machines lock for auto-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:16:49.073681    8179 start.go:368] acquired machines lock for "auto-013000" in 469.292µs
	I0530 13:16:49.073801    8179 start.go:93] Provisioning new machine with config: &{Name:auto-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.27.2 ClusterName:auto-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:16:49.074092    8179 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:16:49.079903    8179 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:16:49.123220    8179 start.go:159] libmachine.API.Create for "auto-013000" (driver="qemu2")
	I0530 13:16:49.123257    8179 client.go:168] LocalClient.Create starting
	I0530 13:16:49.123391    8179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:16:49.123432    8179 main.go:141] libmachine: Decoding PEM data...
	I0530 13:16:49.123457    8179 main.go:141] libmachine: Parsing certificate...
	I0530 13:16:49.123548    8179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:16:49.123587    8179 main.go:141] libmachine: Decoding PEM data...
	I0530 13:16:49.123605    8179 main.go:141] libmachine: Parsing certificate...
	I0530 13:16:49.124153    8179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:16:49.250838    8179 main.go:141] libmachine: Creating SSH key...
	I0530 13:16:49.311316    8179 main.go:141] libmachine: Creating Disk image...
	I0530 13:16:49.311322    8179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:16:49.311471    8179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/disk.qcow2
	I0530 13:16:49.320060    8179 main.go:141] libmachine: STDOUT: 
	I0530 13:16:49.320075    8179 main.go:141] libmachine: STDERR: 
	I0530 13:16:49.320134    8179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/disk.qcow2 +20000M
	I0530 13:16:49.327265    8179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:16:49.327279    8179 main.go:141] libmachine: STDERR: 
	I0530 13:16:49.327292    8179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/disk.qcow2
	I0530 13:16:49.327297    8179 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:16:49.327340    8179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:ea:81:fe:92:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/auto-013000/disk.qcow2
	I0530 13:16:49.328802    8179 main.go:141] libmachine: STDOUT: 
	I0530 13:16:49.328817    8179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:16:49.328828    8179 client.go:171] LocalClient.Create took 205.57125ms
	I0530 13:16:51.331037    8179 start.go:128] duration metric: createHost completed in 2.256959875s
	I0530 13:16:51.331100    8179 start.go:83] releasing machines lock for "auto-013000", held for 2.257446125s
	W0530 13:16:51.331744    8179 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:16:51.341438    8179 out.go:177] 
	W0530 13:16:51.345549    8179 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:16:51.345577    8179 out.go:239] * 
	* 
	W0530 13:16:51.347981    8179 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:16:51.357359    8179 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.76s)
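
Once /var/run/socket_vmnet is reachable again, the failing step can be re-run outside the test harness with the same binary and flags the test used. Both commands below are taken from the log above (the delete follows minikube's own hint for clearing the half-created profile), so treat this as a reproduction sketch rather than a documented recovery procedure:

	out/minikube-darwin-arm64 delete -p auto-013000
	out/minikube-darwin-arm64 start -p auto-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2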

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.756532667s)

                                                
                                                
-- stdout --
	* [calico-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-013000 in cluster calico-013000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-013000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:16:53.546571    8290 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:16:53.546705    8290 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:16:53.546708    8290 out.go:309] Setting ErrFile to fd 2...
	I0530 13:16:53.546711    8290 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:16:53.546778    8290 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:16:53.547861    8290 out.go:303] Setting JSON to false
	I0530 13:16:53.562849    8290 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4584,"bootTime":1685473229,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:16:53.562928    8290 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:16:53.568136    8290 out.go:177] * [calico-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:16:53.571161    8290 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:16:53.571220    8290 notify.go:220] Checking for updates...
	I0530 13:16:53.579095    8290 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:16:53.582148    8290 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:16:53.585080    8290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:16:53.588075    8290 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:16:53.591154    8290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:16:53.592922    8290 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:16:53.592943    8290 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:16:53.597004    8290 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:16:53.603896    8290 start.go:295] selected driver: qemu2
	I0530 13:16:53.603905    8290 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:16:53.603915    8290 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:16:53.605728    8290 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:16:53.609036    8290 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:16:53.612189    8290 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:16:53.612207    8290 cni.go:84] Creating CNI manager for "calico"
	I0530 13:16:53.612210    8290 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0530 13:16:53.612218    8290 start_flags.go:319] config:
	{Name:calico-013000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:calico-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:16:53.612290    8290 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:16:53.621075    8290 out.go:177] * Starting control plane node calico-013000 in cluster calico-013000
	I0530 13:16:53.625099    8290 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:16:53.625120    8290 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:16:53.625135    8290 cache.go:57] Caching tarball of preloaded images
	I0530 13:16:53.625199    8290 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:16:53.625205    8290 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:16:53.625261    8290 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/calico-013000/config.json ...
	I0530 13:16:53.625272    8290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/calico-013000/config.json: {Name:mkbd324c96b14dfc58e30fde970fbdf95788efb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:16:53.625496    8290 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:16:53.625510    8290 start.go:364] acquiring machines lock for calico-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:16:53.625537    8290 start.go:368] acquired machines lock for "calico-013000" in 22.958µs
	I0530 13:16:53.625550    8290 start.go:93] Provisioning new machine with config: &{Name:calico-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.2 ClusterName:calico-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:16:53.625578    8290 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:16:53.634088    8290 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:16:53.650493    8290 start.go:159] libmachine.API.Create for "calico-013000" (driver="qemu2")
	I0530 13:16:53.650512    8290 client.go:168] LocalClient.Create starting
	I0530 13:16:53.650563    8290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:16:53.650584    8290 main.go:141] libmachine: Decoding PEM data...
	I0530 13:16:53.650598    8290 main.go:141] libmachine: Parsing certificate...
	I0530 13:16:53.650619    8290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:16:53.650634    8290 main.go:141] libmachine: Decoding PEM data...
	I0530 13:16:53.650641    8290 main.go:141] libmachine: Parsing certificate...
	I0530 13:16:53.650968    8290 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:16:53.765654    8290 main.go:141] libmachine: Creating SSH key...
	I0530 13:16:53.835905    8290 main.go:141] libmachine: Creating Disk image...
	I0530 13:16:53.835912    8290 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:16:53.836062    8290 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/disk.qcow2
	I0530 13:16:53.844565    8290 main.go:141] libmachine: STDOUT: 
	I0530 13:16:53.844580    8290 main.go:141] libmachine: STDERR: 
	I0530 13:16:53.844634    8290 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/disk.qcow2 +20000M
	I0530 13:16:53.851721    8290 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:16:53.851733    8290 main.go:141] libmachine: STDERR: 
	I0530 13:16:53.851756    8290 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/disk.qcow2
	I0530 13:16:53.851763    8290 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:16:53.851801    8290 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:ca:f7:a9:e9:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/disk.qcow2
	I0530 13:16:53.853351    8290 main.go:141] libmachine: STDOUT: 
	I0530 13:16:53.853365    8290 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:16:53.853384    8290 client.go:171] LocalClient.Create took 202.872625ms
	I0530 13:16:55.855493    8290 start.go:128] duration metric: createHost completed in 2.229946666s
	I0530 13:16:55.855553    8290 start.go:83] releasing machines lock for "calico-013000", held for 2.230055792s
	W0530 13:16:55.855638    8290 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:16:55.868090    8290 out.go:177] * Deleting "calico-013000" in qemu2 ...
	W0530 13:16:55.890463    8290 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:16:55.890496    8290 start.go:702] Will try again in 5 seconds ...
	I0530 13:17:00.892684    8290 start.go:364] acquiring machines lock for calico-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:17:00.893217    8290 start.go:368] acquired machines lock for "calico-013000" in 419.583µs
	I0530 13:17:00.893344    8290 start.go:93] Provisioning new machine with config: &{Name:calico-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.2 ClusterName:calico-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:17:00.893630    8290 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:17:00.903432    8290 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:17:00.951020    8290 start.go:159] libmachine.API.Create for "calico-013000" (driver="qemu2")
	I0530 13:17:00.951081    8290 client.go:168] LocalClient.Create starting
	I0530 13:17:00.951219    8290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:17:00.951268    8290 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:00.951291    8290 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:00.951396    8290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:17:00.951428    8290 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:00.951448    8290 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:00.951977    8290 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:17:01.078434    8290 main.go:141] libmachine: Creating SSH key...
	I0530 13:17:01.215568    8290 main.go:141] libmachine: Creating Disk image...
	I0530 13:17:01.215574    8290 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:17:01.215730    8290 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/disk.qcow2
	I0530 13:17:01.224592    8290 main.go:141] libmachine: STDOUT: 
	I0530 13:17:01.224609    8290 main.go:141] libmachine: STDERR: 
	I0530 13:17:01.224665    8290 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/disk.qcow2 +20000M
	I0530 13:17:01.231758    8290 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:17:01.231771    8290 main.go:141] libmachine: STDERR: 
	I0530 13:17:01.231792    8290 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/disk.qcow2
	I0530 13:17:01.231799    8290 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:17:01.231840    8290 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:fa:60:af:90:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/calico-013000/disk.qcow2
	I0530 13:17:01.233308    8290 main.go:141] libmachine: STDOUT: 
	I0530 13:17:01.233321    8290 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:17:01.233355    8290 client.go:171] LocalClient.Create took 282.273125ms
	I0530 13:17:03.235497    8290 start.go:128] duration metric: createHost completed in 2.341895417s
	I0530 13:17:03.235559    8290 start.go:83] releasing machines lock for "calico-013000", held for 2.342364666s
	W0530 13:17:03.236307    8290 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:17:03.245925    8290 out.go:177] 
	W0530 13:17:03.250097    8290 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:17:03.250149    8290 out.go:239] * 
	* 
	W0530 13:17:03.252769    8290 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:17:03.260964    8290 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.76s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.7400205s)

                                                
                                                
-- stdout --
	* [custom-flannel-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-013000 in cluster custom-flannel-013000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-013000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:17:05.649358    8407 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:17:05.649480    8407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:17:05.649483    8407 out.go:309] Setting ErrFile to fd 2...
	I0530 13:17:05.649485    8407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:17:05.649557    8407 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:17:05.650614    8407 out.go:303] Setting JSON to false
	I0530 13:17:05.665920    8407 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4596,"bootTime":1685473229,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:17:05.665983    8407 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:17:05.670824    8407 out.go:177] * [custom-flannel-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:17:05.677642    8407 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:17:05.677681    8407 notify.go:220] Checking for updates...
	I0530 13:17:05.684547    8407 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:17:05.687740    8407 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:17:05.690695    8407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:17:05.693536    8407 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:17:05.696618    8407 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:17:05.699956    8407 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:17:05.699980    8407 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:17:05.703606    8407 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:17:05.715663    8407 start.go:295] selected driver: qemu2
	I0530 13:17:05.715677    8407 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:17:05.715684    8407 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:17:05.717594    8407 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:17:05.720620    8407 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:17:05.723691    8407 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:17:05.723706    8407 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0530 13:17:05.723721    8407 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0530 13:17:05.723726    8407 start_flags.go:319] config:
	{Name:custom-flannel-013000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:custom-flannel-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/sock
et_vmnet StaticIP:}
	I0530 13:17:05.723794    8407 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:17:05.732558    8407 out.go:177] * Starting control plane node custom-flannel-013000 in cluster custom-flannel-013000
	I0530 13:17:05.736657    8407 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:17:05.736679    8407 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:17:05.736691    8407 cache.go:57] Caching tarball of preloaded images
	I0530 13:17:05.736757    8407 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:17:05.736762    8407 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:17:05.736817    8407 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/custom-flannel-013000/config.json ...
	I0530 13:17:05.736829    8407 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/custom-flannel-013000/config.json: {Name:mk80b86fec9e9251be93eb7f18702165052d412a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:17:05.737027    8407 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:17:05.737043    8407 start.go:364] acquiring machines lock for custom-flannel-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:17:05.737073    8407 start.go:368] acquired machines lock for "custom-flannel-013000" in 26.041µs
	I0530 13:17:05.737089    8407 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.27.2 ClusterName:custom-flannel-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:17:05.737117    8407 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:17:05.745596    8407 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:17:05.762800    8407 start.go:159] libmachine.API.Create for "custom-flannel-013000" (driver="qemu2")
	I0530 13:17:05.762823    8407 client.go:168] LocalClient.Create starting
	I0530 13:17:05.762887    8407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:17:05.762917    8407 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:05.762928    8407 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:05.762970    8407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:17:05.762986    8407 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:05.762996    8407 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:05.763352    8407 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:17:05.881118    8407 main.go:141] libmachine: Creating SSH key...
	I0530 13:17:05.998155    8407 main.go:141] libmachine: Creating Disk image...
	I0530 13:17:05.998161    8407 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:17:05.998303    8407 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/disk.qcow2
	I0530 13:17:06.007111    8407 main.go:141] libmachine: STDOUT: 
	I0530 13:17:06.007123    8407 main.go:141] libmachine: STDERR: 
	I0530 13:17:06.007183    8407 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/disk.qcow2 +20000M
	I0530 13:17:06.014434    8407 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:17:06.014452    8407 main.go:141] libmachine: STDERR: 
	I0530 13:17:06.014478    8407 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/disk.qcow2
	I0530 13:17:06.014482    8407 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:17:06.014521    8407 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:eb:81:7e:ae:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/disk.qcow2
	I0530 13:17:06.016115    8407 main.go:141] libmachine: STDOUT: 
	I0530 13:17:06.016128    8407 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:17:06.016149    8407 client.go:171] LocalClient.Create took 253.327708ms
	I0530 13:17:08.018325    8407 start.go:128] duration metric: createHost completed in 2.281232042s
	I0530 13:17:08.018417    8407 start.go:83] releasing machines lock for "custom-flannel-013000", held for 2.28138425s
	W0530 13:17:08.018483    8407 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:17:08.029925    8407 out.go:177] * Deleting "custom-flannel-013000" in qemu2 ...
	W0530 13:17:08.051525    8407 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:17:08.051550    8407 start.go:702] Will try again in 5 seconds ...
	I0530 13:17:13.053728    8407 start.go:364] acquiring machines lock for custom-flannel-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:17:13.054378    8407 start.go:368] acquired machines lock for "custom-flannel-013000" in 497.583µs
	I0530 13:17:13.054494    8407 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.27.2 ClusterName:custom-flannel-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:17:13.054744    8407 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:17:13.064669    8407 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:17:13.113551    8407 start.go:159] libmachine.API.Create for "custom-flannel-013000" (driver="qemu2")
	I0530 13:17:13.113591    8407 client.go:168] LocalClient.Create starting
	I0530 13:17:13.113705    8407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:17:13.113744    8407 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:13.113762    8407 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:13.113842    8407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:17:13.113869    8407 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:13.113884    8407 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:13.114397    8407 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:17:13.243017    8407 main.go:141] libmachine: Creating SSH key...
	I0530 13:17:13.300901    8407 main.go:141] libmachine: Creating Disk image...
	I0530 13:17:13.300906    8407 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:17:13.301055    8407 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/disk.qcow2
	I0530 13:17:13.309487    8407 main.go:141] libmachine: STDOUT: 
	I0530 13:17:13.309500    8407 main.go:141] libmachine: STDERR: 
	I0530 13:17:13.309565    8407 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/disk.qcow2 +20000M
	I0530 13:17:13.316730    8407 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:17:13.316746    8407 main.go:141] libmachine: STDERR: 
	I0530 13:17:13.316762    8407 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/disk.qcow2
	I0530 13:17:13.316767    8407 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:17:13.316806    8407 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:fc:9e:af:31:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/custom-flannel-013000/disk.qcow2
	I0530 13:17:13.318301    8407 main.go:141] libmachine: STDOUT: 
	I0530 13:17:13.318314    8407 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:17:13.318340    8407 client.go:171] LocalClient.Create took 204.738083ms
	I0530 13:17:15.320457    8407 start.go:128] duration metric: createHost completed in 2.265717375s
	I0530 13:17:15.320525    8407 start.go:83] releasing machines lock for "custom-flannel-013000", held for 2.266172167s
	W0530 13:17:15.321177    8407 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:17:15.332842    8407 out.go:177] 
	W0530 13:17:15.335927    8407 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:17:15.335969    8407 out.go:239] * 
	* 
	W0530 13:17:15.338818    8407 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:17:15.347735    8407 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.74s)
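
Every start in this group fails at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), and after one retry minikube exits with status 80 (GUEST_PROVISION). A minimal Go sketch of a pre-flight probe for that socket follows; the socket path is the one reported in the log, while the probe itself is an assumption about how one might check the daemon on the CI host, not part of minikube or the test suite.

// probe_socket_vmnet.go: dial the unix socket that the failing qemu
// launches depend on. If this dial fails, socket_vmnet is not running
// (or not listening at this path) on the agent.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path reported in the failures above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails, the next step would be restarting the socket_vmnet service on the agent; how that service is managed on this host is not visible from the log.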

TestNetworkPlugins/group/false/Start (9.76s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p false-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.749719333s)

-- stdout --
	* [false-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-013000 in cluster false-013000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-013000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0530 13:17:17.715827    8524 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:17:17.715957    8524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:17:17.715960    8524 out.go:309] Setting ErrFile to fd 2...
	I0530 13:17:17.715963    8524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:17:17.716034    8524 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:17:17.717090    8524 out.go:303] Setting JSON to false
	I0530 13:17:17.732152    8524 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4608,"bootTime":1685473229,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:17:17.732213    8524 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:17:17.741557    8524 out.go:177] * [false-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:17:17.745651    8524 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:17:17.745687    8524 notify.go:220] Checking for updates...
	I0530 13:17:17.751588    8524 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:17:17.754720    8524 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:17:17.756117    8524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:17:17.759599    8524 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:17:17.762648    8524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:17:17.765904    8524 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:17:17.765926    8524 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:17:17.770524    8524 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:17:17.777621    8524 start.go:295] selected driver: qemu2
	I0530 13:17:17.777629    8524 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:17:17.777637    8524 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:17:17.779505    8524 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:17:17.782641    8524 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:17:17.785706    8524 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:17:17.785728    8524 cni.go:84] Creating CNI manager for "false"
	I0530 13:17:17.785735    8524 start_flags.go:319] config:
	{Name:false-013000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:false-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:17:17.785837    8524 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:17:17.794622    8524 out.go:177] * Starting control plane node false-013000 in cluster false-013000
	I0530 13:17:17.798624    8524 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:17:17.798647    8524 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:17:17.798682    8524 cache.go:57] Caching tarball of preloaded images
	I0530 13:17:17.798738    8524 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:17:17.798743    8524 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:17:17.798798    8524 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/false-013000/config.json ...
	I0530 13:17:17.798810    8524 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/false-013000/config.json: {Name:mkc635cef4e8810c9b971b090fbc93d3ee181062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:17:17.799004    8524 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:17:17.799019    8524 start.go:364] acquiring machines lock for false-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:17:17.799048    8524 start.go:368] acquired machines lock for "false-013000" in 23.458µs
	I0530 13:17:17.799062    8524 start.go:93] Provisioning new machine with config: &{Name:false-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.27.2 ClusterName:false-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:17:17.799087    8524 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:17:17.807668    8524 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:17:17.824385    8524 start.go:159] libmachine.API.Create for "false-013000" (driver="qemu2")
	I0530 13:17:17.824406    8524 client.go:168] LocalClient.Create starting
	I0530 13:17:17.824464    8524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:17:17.824486    8524 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:17.824496    8524 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:17.824516    8524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:17:17.824531    8524 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:17.824539    8524 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:17.824864    8524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:17:17.962059    8524 main.go:141] libmachine: Creating SSH key...
	I0530 13:17:18.099121    8524 main.go:141] libmachine: Creating Disk image...
	I0530 13:17:18.099129    8524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:17:18.099288    8524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/disk.qcow2
	I0530 13:17:18.108149    8524 main.go:141] libmachine: STDOUT: 
	I0530 13:17:18.108162    8524 main.go:141] libmachine: STDERR: 
	I0530 13:17:18.108226    8524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/disk.qcow2 +20000M
	I0530 13:17:18.115562    8524 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:17:18.115573    8524 main.go:141] libmachine: STDERR: 
	I0530 13:17:18.115588    8524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/disk.qcow2
	I0530 13:17:18.115597    8524 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:17:18.115636    8524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:ce:be:ca:5c:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/disk.qcow2
	I0530 13:17:18.117165    8524 main.go:141] libmachine: STDOUT: 
	I0530 13:17:18.117175    8524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:17:18.117195    8524 client.go:171] LocalClient.Create took 292.789834ms
	I0530 13:17:20.119353    8524 start.go:128] duration metric: createHost completed in 2.320294958s
	I0530 13:17:20.119409    8524 start.go:83] releasing machines lock for "false-013000", held for 2.320403875s
	W0530 13:17:20.119480    8524 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:17:20.131808    8524 out.go:177] * Deleting "false-013000" in qemu2 ...
	W0530 13:17:20.151951    8524 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:17:20.151984    8524 start.go:702] Will try again in 5 seconds ...
	I0530 13:17:25.154209    8524 start.go:364] acquiring machines lock for false-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:17:25.154712    8524 start.go:368] acquired machines lock for "false-013000" in 388.167µs
	I0530 13:17:25.154874    8524 start.go:93] Provisioning new machine with config: &{Name:false-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.27.2 ClusterName:false-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:17:25.155187    8524 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:17:25.161252    8524 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:17:25.209682    8524 start.go:159] libmachine.API.Create for "false-013000" (driver="qemu2")
	I0530 13:17:25.209721    8524 client.go:168] LocalClient.Create starting
	I0530 13:17:25.209867    8524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:17:25.209906    8524 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:25.209922    8524 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:25.210010    8524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:17:25.210037    8524 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:25.210052    8524 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:25.210599    8524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:17:25.337052    8524 main.go:141] libmachine: Creating SSH key...
	I0530 13:17:25.376732    8524 main.go:141] libmachine: Creating Disk image...
	I0530 13:17:25.376738    8524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:17:25.376924    8524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/disk.qcow2
	I0530 13:17:25.385580    8524 main.go:141] libmachine: STDOUT: 
	I0530 13:17:25.385592    8524 main.go:141] libmachine: STDERR: 
	I0530 13:17:25.385653    8524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/disk.qcow2 +20000M
	I0530 13:17:25.392766    8524 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:17:25.392777    8524 main.go:141] libmachine: STDERR: 
	I0530 13:17:25.392788    8524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/disk.qcow2
	I0530 13:17:25.392795    8524 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:17:25.392837    8524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:f8:e9:10:6e:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/false-013000/disk.qcow2
	I0530 13:17:25.394304    8524 main.go:141] libmachine: STDOUT: 
	I0530 13:17:25.394316    8524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:17:25.394328    8524 client.go:171] LocalClient.Create took 184.605625ms
	I0530 13:17:27.396469    8524 start.go:128] duration metric: createHost completed in 2.241277917s
	I0530 13:17:27.396564    8524 start.go:83] releasing machines lock for "false-013000", held for 2.2418765s
	W0530 13:17:27.397248    8524 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:17:27.407954    8524 out.go:177] 
	W0530 13:17:27.412036    8524 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:17:27.412062    8524 out.go:239] * 
	* 
	W0530 13:17:27.414948    8524 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:17:27.423933    8524 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.76s)
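
The false-CNI run reproduces the same refusal at the same step, which points at the host environment rather than the CNI under test. Assuming socket_vmnet_client follows the usage visible in the logged command lines (client binary, socket path, then the command to wrap), the failing step can be reproduced in isolation with a sketch like the one below; /usr/bin/true stands in for the qemu-system-aarch64 invocation and is an assumption for illustration only.

// repro_socket_vmnet_client.go: re-run just the step that fails in the
// logs above, wrapping a harmless command instead of qemu, to surface
// the same "Connection refused" error without a full `minikube start`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same client binary and socket path as in the logged qemu command lines.
	cmd := exec.Command(
		"/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet",
		"/usr/bin/true", // stand-in for qemu-system-aarch64 (assumption)
	)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// Mirrors the failure mode in the report: the client exits
		// non-zero when the socket_vmnet daemon is not listening.
		fmt.Printf("socket_vmnet_client failed: %v\n", err)
	}
}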

TestNetworkPlugins/group/kindnet/Start (9.63s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.627501042s)

-- stdout --
	* [kindnet-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-013000 in cluster kindnet-013000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-013000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0530 13:17:29.609385    8634 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:17:29.609517    8634 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:17:29.609520    8634 out.go:309] Setting ErrFile to fd 2...
	I0530 13:17:29.609523    8634 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:17:29.609595    8634 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:17:29.610617    8634 out.go:303] Setting JSON to false
	I0530 13:17:29.626050    8634 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4620,"bootTime":1685473229,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:17:29.626116    8634 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:17:29.630414    8634 out.go:177] * [kindnet-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:17:29.637369    8634 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:17:29.637405    8634 notify.go:220] Checking for updates...
	I0530 13:17:29.644415    8634 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:17:29.647473    8634 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:17:29.650443    8634 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:17:29.653442    8634 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:17:29.656432    8634 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:17:29.658003    8634 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:17:29.658027    8634 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:17:29.662380    8634 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:17:29.669235    8634 start.go:295] selected driver: qemu2
	I0530 13:17:29.669242    8634 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:17:29.669250    8634 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:17:29.671177    8634 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:17:29.674433    8634 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:17:29.677552    8634 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:17:29.677572    8634 cni.go:84] Creating CNI manager for "kindnet"
	I0530 13:17:29.677576    8634 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0530 13:17:29.677590    8634 start_flags.go:319] config:
	{Name:kindnet-013000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kindnet-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:17:29.677673    8634 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:17:29.686465    8634 out.go:177] * Starting control plane node kindnet-013000 in cluster kindnet-013000
	I0530 13:17:29.690376    8634 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:17:29.690404    8634 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:17:29.690424    8634 cache.go:57] Caching tarball of preloaded images
	I0530 13:17:29.690491    8634 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:17:29.690496    8634 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:17:29.690548    8634 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/kindnet-013000/config.json ...
	I0530 13:17:29.690564    8634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/kindnet-013000/config.json: {Name:mk5c3b3470961ae4de9db4c7c723bc327323c057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:17:29.690768    8634 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:17:29.690783    8634 start.go:364] acquiring machines lock for kindnet-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:17:29.690813    8634 start.go:368] acquired machines lock for "kindnet-013000" in 24.917µs
	I0530 13:17:29.690827    8634 start.go:93] Provisioning new machine with config: &{Name:kindnet-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:kindnet-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:17:29.690853    8634 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:17:29.699479    8634 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:17:29.716442    8634 start.go:159] libmachine.API.Create for "kindnet-013000" (driver="qemu2")
	I0530 13:17:29.716458    8634 client.go:168] LocalClient.Create starting
	I0530 13:17:29.716518    8634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:17:29.716539    8634 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:29.716546    8634 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:29.716569    8634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:17:29.716584    8634 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:29.716592    8634 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:29.716913    8634 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:17:29.832429    8634 main.go:141] libmachine: Creating SSH key...
	I0530 13:17:29.877433    8634 main.go:141] libmachine: Creating Disk image...
	I0530 13:17:29.877438    8634 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:17:29.877962    8634 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/disk.qcow2
	I0530 13:17:29.887380    8634 main.go:141] libmachine: STDOUT: 
	I0530 13:17:29.887404    8634 main.go:141] libmachine: STDERR: 
	I0530 13:17:29.887458    8634 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/disk.qcow2 +20000M
	I0530 13:17:29.894742    8634 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:17:29.894755    8634 main.go:141] libmachine: STDERR: 
	I0530 13:17:29.894777    8634 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/disk.qcow2
	I0530 13:17:29.894785    8634 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:17:29.894823    8634 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:15:11:82:a1:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/disk.qcow2
	I0530 13:17:29.896336    8634 main.go:141] libmachine: STDOUT: 
	I0530 13:17:29.896348    8634 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:17:29.896371    8634 client.go:171] LocalClient.Create took 179.912084ms
	I0530 13:17:31.898532    8634 start.go:128] duration metric: createHost completed in 2.207711416s
	I0530 13:17:31.898580    8634 start.go:83] releasing machines lock for "kindnet-013000", held for 2.207806584s
	W0530 13:17:31.898637    8634 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:17:31.907112    8634 out.go:177] * Deleting "kindnet-013000" in qemu2 ...
	W0530 13:17:31.928113    8634 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:17:31.928142    8634 start.go:702] Will try again in 5 seconds ...
	I0530 13:17:36.930326    8634 start.go:364] acquiring machines lock for kindnet-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:17:36.930905    8634 start.go:368] acquired machines lock for "kindnet-013000" in 454.5µs
	I0530 13:17:36.931048    8634 start.go:93] Provisioning new machine with config: &{Name:kindnet-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:kindnet-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:17:36.931291    8634 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:17:36.941123    8634 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:17:36.989427    8634 start.go:159] libmachine.API.Create for "kindnet-013000" (driver="qemu2")
	I0530 13:17:36.989467    8634 client.go:168] LocalClient.Create starting
	I0530 13:17:36.989613    8634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:17:36.989658    8634 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:36.989672    8634 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:36.989790    8634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:17:36.989820    8634 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:36.989837    8634 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:36.990388    8634 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:17:37.117561    8634 main.go:141] libmachine: Creating SSH key...
	I0530 13:17:37.148819    8634 main.go:141] libmachine: Creating Disk image...
	I0530 13:17:37.148824    8634 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:17:37.148973    8634 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/disk.qcow2
	I0530 13:17:37.157613    8634 main.go:141] libmachine: STDOUT: 
	I0530 13:17:37.157627    8634 main.go:141] libmachine: STDERR: 
	I0530 13:17:37.157692    8634 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/disk.qcow2 +20000M
	I0530 13:17:37.164909    8634 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:17:37.164922    8634 main.go:141] libmachine: STDERR: 
	I0530 13:17:37.164937    8634 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/disk.qcow2
	I0530 13:17:37.164944    8634 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:17:37.164982    8634 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:d8:33:92:c5:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kindnet-013000/disk.qcow2
	I0530 13:17:37.166543    8634 main.go:141] libmachine: STDOUT: 
	I0530 13:17:37.166554    8634 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:17:37.166571    8634 client.go:171] LocalClient.Create took 177.097458ms
	I0530 13:17:39.168710    8634 start.go:128] duration metric: createHost completed in 2.237444s
	I0530 13:17:39.168786    8634 start.go:83] releasing machines lock for "kindnet-013000", held for 2.237881959s
	W0530 13:17:39.169433    8634 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:17:39.179081    8634 out.go:177] 
	W0530 13:17:39.184220    8634 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:17:39.184259    8634 out.go:239] * 
	* 
	W0530 13:17:39.187365    8634 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:17:39.197093    8634 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.63s)
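
Every attempt in this test fails at the same step: socket_vmnet_client cannot reach the /var/run/socket_vmnet unix socket, so QEMU never receives a network file descriptor and createHost aborts. A minimal, illustrative Go sketch (not part of net_test.go; the socket path is taken from the log above) that reproduces just that connectivity check:

	// socketcheck.go - hypothetical helper, sketched from the failure above.
	// A "connection refused" here means nothing is listening on the socket,
	// which is exactly what socket_vmnet_client reports in the log.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path as shown in the log

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}

If this dial is refused, the socket_vmnet daemon on the CI host is presumably not running (or not bound to that path), which would account for every GUEST_PROVISION failure in this group.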

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.831280709s)

                                                
                                                
-- stdout --
	* [flannel-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-013000 in cluster flannel-013000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-013000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:17:41.425834    8749 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:17:41.425970    8749 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:17:41.425972    8749 out.go:309] Setting ErrFile to fd 2...
	I0530 13:17:41.425974    8749 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:17:41.426041    8749 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:17:41.427098    8749 out.go:303] Setting JSON to false
	I0530 13:17:41.442266    8749 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4632,"bootTime":1685473229,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:17:41.442323    8749 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:17:41.446312    8749 out.go:177] * [flannel-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:17:41.453334    8749 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:17:41.453394    8749 notify.go:220] Checking for updates...
	I0530 13:17:41.460287    8749 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:17:41.463344    8749 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:17:41.466316    8749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:17:41.469332    8749 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:17:41.472340    8749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:17:41.473940    8749 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:17:41.473959    8749 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:17:41.478260    8749 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:17:41.485120    8749 start.go:295] selected driver: qemu2
	I0530 13:17:41.485127    8749 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:17:41.485133    8749 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:17:41.487010    8749 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:17:41.490340    8749 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:17:41.493387    8749 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:17:41.493405    8749 cni.go:84] Creating CNI manager for "flannel"
	I0530 13:17:41.493408    8749 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0530 13:17:41.493416    8749 start_flags.go:319] config:
	{Name:flannel-013000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:flannel-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:17:41.493487    8749 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:17:41.498255    8749 out.go:177] * Starting control plane node flannel-013000 in cluster flannel-013000
	I0530 13:17:41.506312    8749 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:17:41.506349    8749 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:17:41.506367    8749 cache.go:57] Caching tarball of preloaded images
	I0530 13:17:41.506439    8749 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:17:41.506445    8749 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:17:41.506506    8749 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/flannel-013000/config.json ...
	I0530 13:17:41.506524    8749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/flannel-013000/config.json: {Name:mkd703dc14ee08ad1807d340634f76a02e218436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:17:41.506728    8749 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:17:41.506744    8749 start.go:364] acquiring machines lock for flannel-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:17:41.506775    8749 start.go:368] acquired machines lock for "flannel-013000" in 25.459µs
	I0530 13:17:41.506789    8749 start.go:93] Provisioning new machine with config: &{Name:flannel-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:flannel-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:17:41.506846    8749 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:17:41.515362    8749 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:17:41.532496    8749 start.go:159] libmachine.API.Create for "flannel-013000" (driver="qemu2")
	I0530 13:17:41.532511    8749 client.go:168] LocalClient.Create starting
	I0530 13:17:41.532584    8749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:17:41.532612    8749 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:41.532621    8749 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:41.532665    8749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:17:41.532679    8749 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:41.532689    8749 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:41.533031    8749 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:17:41.648150    8749 main.go:141] libmachine: Creating SSH key...
	I0530 13:17:41.849275    8749 main.go:141] libmachine: Creating Disk image...
	I0530 13:17:41.849284    8749 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:17:41.849471    8749 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/disk.qcow2
	I0530 13:17:41.858795    8749 main.go:141] libmachine: STDOUT: 
	I0530 13:17:41.858811    8749 main.go:141] libmachine: STDERR: 
	I0530 13:17:41.858866    8749 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/disk.qcow2 +20000M
	I0530 13:17:41.866118    8749 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:17:41.866130    8749 main.go:141] libmachine: STDERR: 
	I0530 13:17:41.866155    8749 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/disk.qcow2
	I0530 13:17:41.866161    8749 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:17:41.866206    8749 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:f2:85:b3:a8:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/disk.qcow2
	I0530 13:17:41.867705    8749 main.go:141] libmachine: STDOUT: 
	I0530 13:17:41.867720    8749 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:17:41.867738    8749 client.go:171] LocalClient.Create took 335.230625ms
	I0530 13:17:43.869878    8749 start.go:128] duration metric: createHost completed in 2.363062416s
	I0530 13:17:43.869929    8749 start.go:83] releasing machines lock for "flannel-013000", held for 2.363197417s
	W0530 13:17:43.869987    8749 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:17:43.877585    8749 out.go:177] * Deleting "flannel-013000" in qemu2 ...
	W0530 13:17:43.898868    8749 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:17:43.898896    8749 start.go:702] Will try again in 5 seconds ...
	I0530 13:17:48.901071    8749 start.go:364] acquiring machines lock for flannel-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:17:48.901668    8749 start.go:368] acquired machines lock for "flannel-013000" in 462.792µs
	I0530 13:17:48.901809    8749 start.go:93] Provisioning new machine with config: &{Name:flannel-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:flannel-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:17:48.902141    8749 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:17:48.907167    8749 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:17:48.953262    8749 start.go:159] libmachine.API.Create for "flannel-013000" (driver="qemu2")
	I0530 13:17:48.953344    8749 client.go:168] LocalClient.Create starting
	I0530 13:17:48.953463    8749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:17:48.953506    8749 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:48.953524    8749 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:48.953613    8749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:17:48.953640    8749 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:48.953656    8749 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:48.954161    8749 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:17:49.080837    8749 main.go:141] libmachine: Creating SSH key...
	I0530 13:17:49.168514    8749 main.go:141] libmachine: Creating Disk image...
	I0530 13:17:49.168520    8749 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:17:49.168669    8749 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/disk.qcow2
	I0530 13:17:49.177183    8749 main.go:141] libmachine: STDOUT: 
	I0530 13:17:49.177197    8749 main.go:141] libmachine: STDERR: 
	I0530 13:17:49.177244    8749 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/disk.qcow2 +20000M
	I0530 13:17:49.185860    8749 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:17:49.185878    8749 main.go:141] libmachine: STDERR: 
	I0530 13:17:49.185889    8749 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/disk.qcow2
	I0530 13:17:49.185895    8749 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:17:49.185934    8749 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:73:01:64:bb:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/flannel-013000/disk.qcow2
	I0530 13:17:49.187530    8749 main.go:141] libmachine: STDOUT: 
	I0530 13:17:49.187543    8749 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:17:49.187559    8749 client.go:171] LocalClient.Create took 234.214375ms
	I0530 13:17:51.189672    8749 start.go:128] duration metric: createHost completed in 2.287557083s
	I0530 13:17:51.189740    8749 start.go:83] releasing machines lock for "flannel-013000", held for 2.288100416s
	W0530 13:17:51.190322    8749 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:17:51.199963    8749 out.go:177] 
	W0530 13:17:51.204142    8749 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:17:51.204167    8749 out.go:239] * 
	* 
	W0530 13:17:51.206830    8749 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:17:51.215966    8749 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.83s)
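
The flannel run follows the same shape as kindnet above: the first createHost fails on the socket_vmnet dial, the profile is deleted, minikube waits a fixed 5 seconds, retries once, and then exits with status 80. A rough, illustrative Go sketch of that retry shape (createHost and deleteHost are placeholders here, not minikube's real API):

	// retry.go - sketch of the attempt/cleanup/retry sequence visible in the log.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	var errRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

	// createHost stands in for libmachine.API.Create; in this CI run it always fails.
	func createHost() error { return errRefused }

	// deleteHost stands in for the "* Deleting ... in qemu2 ..." cleanup step.
	func deleteHost() { fmt.Println("cleaning up failed profile") }

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			deleteHost()
			time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" step
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				return // the caller observes exit status 80
			}
		}
		fmt.Println("host created")
	}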

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.750079792s)

                                                
                                                
-- stdout --
	* [enable-default-cni-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-013000 in cluster enable-default-cni-013000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-013000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:17:53.530131    8866 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:17:53.530280    8866 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:17:53.530283    8866 out.go:309] Setting ErrFile to fd 2...
	I0530 13:17:53.530285    8866 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:17:53.530352    8866 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:17:53.531393    8866 out.go:303] Setting JSON to false
	I0530 13:17:53.546523    8866 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4644,"bootTime":1685473229,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:17:53.546583    8866 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:17:53.550525    8866 out.go:177] * [enable-default-cni-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:17:53.557659    8866 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:17:53.557650    8866 notify.go:220] Checking for updates...
	I0530 13:17:53.564567    8866 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:17:53.567624    8866 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:17:53.570645    8866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:17:53.573617    8866 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:17:53.576647    8866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:17:53.578200    8866 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:17:53.578219    8866 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:17:53.582558    8866 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:17:53.589383    8866 start.go:295] selected driver: qemu2
	I0530 13:17:53.589389    8866 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:17:53.589395    8866 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:17:53.591210    8866 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:17:53.594544    8866 out.go:177] * Automatically selected the socket_vmnet network
	E0530 13:17:53.597692    8866 start_flags.go:453] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0530 13:17:53.597700    8866 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:17:53.597719    8866 cni.go:84] Creating CNI manager for "bridge"
	I0530 13:17:53.597723    8866 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0530 13:17:53.597729    8866 start_flags.go:319] config:
	{Name:enable-default-cni-013000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:enable-default-cni-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Sta
ticIP:}
	I0530 13:17:53.597804    8866 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:17:53.606605    8866 out.go:177] * Starting control plane node enable-default-cni-013000 in cluster enable-default-cni-013000
	I0530 13:17:53.610641    8866 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:17:53.610670    8866 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:17:53.610684    8866 cache.go:57] Caching tarball of preloaded images
	I0530 13:17:53.610755    8866 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:17:53.610768    8866 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:17:53.610820    8866 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/enable-default-cni-013000/config.json ...
	I0530 13:17:53.610837    8866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/enable-default-cni-013000/config.json: {Name:mka50f1191572e33a8423bd264b2e78da334fc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:17:53.611035    8866 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:17:53.611052    8866 start.go:364] acquiring machines lock for enable-default-cni-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:17:53.611082    8866 start.go:368] acquired machines lock for "enable-default-cni-013000" in 25.125µs
	I0530 13:17:53.611096    8866 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:enable-default-cni-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:17:53.611132    8866 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:17:53.619629    8866 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:17:53.636332    8866 start.go:159] libmachine.API.Create for "enable-default-cni-013000" (driver="qemu2")
	I0530 13:17:53.636370    8866 client.go:168] LocalClient.Create starting
	I0530 13:17:53.636440    8866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:17:53.636465    8866 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:53.636475    8866 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:53.636517    8866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:17:53.636533    8866 main.go:141] libmachine: Decoding PEM data...
	I0530 13:17:53.636542    8866 main.go:141] libmachine: Parsing certificate...
	I0530 13:17:53.636896    8866 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:17:53.751895    8866 main.go:141] libmachine: Creating SSH key...
	I0530 13:17:53.823422    8866 main.go:141] libmachine: Creating Disk image...
	I0530 13:17:53.823428    8866 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:17:53.823580    8866 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/disk.qcow2
	I0530 13:17:53.832221    8866 main.go:141] libmachine: STDOUT: 
	I0530 13:17:53.832233    8866 main.go:141] libmachine: STDERR: 
	I0530 13:17:53.832282    8866 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/disk.qcow2 +20000M
	I0530 13:17:53.839390    8866 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:17:53.839405    8866 main.go:141] libmachine: STDERR: 
	I0530 13:17:53.839418    8866 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/disk.qcow2
	I0530 13:17:53.839427    8866 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:17:53.839467    8866 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:19:83:fe:54:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/disk.qcow2
	I0530 13:17:53.841003    8866 main.go:141] libmachine: STDOUT: 
	I0530 13:17:53.841015    8866 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:17:53.841032    8866 client.go:171] LocalClient.Create took 204.659ms
	I0530 13:17:55.843139    8866 start.go:128] duration metric: createHost completed in 2.232036083s
	I0530 13:17:55.843470    8866 start.go:83] releasing machines lock for "enable-default-cni-013000", held for 2.232415792s
	W0530 13:17:55.843542    8866 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:17:55.856940    8866 out.go:177] * Deleting "enable-default-cni-013000" in qemu2 ...
	W0530 13:17:55.877912    8866 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:17:55.877933    8866 start.go:702] Will try again in 5 seconds ...
	I0530 13:18:00.880050    8866 start.go:364] acquiring machines lock for enable-default-cni-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:00.880596    8866 start.go:368] acquired machines lock for "enable-default-cni-013000" in 420µs
	I0530 13:18:00.880741    8866 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:enable-default-cni-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:18:00.880995    8866 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:18:00.892967    8866 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:18:00.940393    8866 start.go:159] libmachine.API.Create for "enable-default-cni-013000" (driver="qemu2")
	I0530 13:18:00.940436    8866 client.go:168] LocalClient.Create starting
	I0530 13:18:00.940558    8866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:18:00.940599    8866 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:00.940627    8866 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:00.940708    8866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:18:00.940735    8866 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:00.940750    8866 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:00.941247    8866 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:18:01.067475    8866 main.go:141] libmachine: Creating SSH key...
	I0530 13:18:01.193866    8866 main.go:141] libmachine: Creating Disk image...
	I0530 13:18:01.193872    8866 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:18:01.194022    8866 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/disk.qcow2
	I0530 13:18:01.202932    8866 main.go:141] libmachine: STDOUT: 
	I0530 13:18:01.202952    8866 main.go:141] libmachine: STDERR: 
	I0530 13:18:01.203027    8866 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/disk.qcow2 +20000M
	I0530 13:18:01.210142    8866 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:18:01.210154    8866 main.go:141] libmachine: STDERR: 
	I0530 13:18:01.210172    8866 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/disk.qcow2
	I0530 13:18:01.210177    8866 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:18:01.210210    8866 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:45:c8:cc:07:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/enable-default-cni-013000/disk.qcow2
	I0530 13:18:01.211718    8866 main.go:141] libmachine: STDOUT: 
	I0530 13:18:01.211732    8866 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:01.211744    8866 client.go:171] LocalClient.Create took 271.308875ms
	I0530 13:18:03.213919    8866 start.go:128] duration metric: createHost completed in 2.332931417s
	I0530 13:18:03.213980    8866 start.go:83] releasing machines lock for "enable-default-cni-013000", held for 2.333403875s
	W0530 13:18:03.214702    8866 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:03.225156    8866 out.go:177] 
	W0530 13:18:03.230449    8866 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:18:03.230512    8866 out.go:239] * 
	* 
	W0530 13:18:03.233193    8866 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:18:03.239281    8866 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.76s)
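
The TestNetworkPlugins failures in this group (enable-default-cni above, plus bridge and kubenet below) all stop at the same point: socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never launched. The snippet below is a hypothetical Go diagnostic written for this report, not part of minikube or its test suite; it only assumes the SocketVMnetPath value shown in the profile config dumps above and reproduces the connection the driver is effectively attempting.

	// probe_socket_vmnet.go - hypothetical diagnostic sketch, not minikube code.
	// Dials the Unix socket that socket_vmnet_client expects. A "connection refused"
	// here matches the failure captured in the logs above and usually means the
	// socket_vmnet daemon is not running (or not listening at this path) on the host.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the profile config above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
	}

If this probe fails on the Jenkins agent the way it does in the logs, the failures are an environment problem (socket_vmnet not started) rather than a regression in the commit under test.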

TestNetworkPlugins/group/bridge/Start (9.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.747380458s)

                                                
                                                
-- stdout --
	* [bridge-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-013000 in cluster bridge-013000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-013000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:18:05.375641    8976 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:18:05.375774    8976 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:05.375777    8976 out.go:309] Setting ErrFile to fd 2...
	I0530 13:18:05.375780    8976 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:05.375846    8976 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:18:05.376873    8976 out.go:303] Setting JSON to false
	I0530 13:18:05.392282    8976 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4656,"bootTime":1685473229,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:18:05.392339    8976 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:18:05.396310    8976 out.go:177] * [bridge-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:18:05.399321    8976 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:18:05.399422    8976 notify.go:220] Checking for updates...
	I0530 13:18:05.402157    8976 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:18:05.406270    8976 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:18:05.409268    8976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:18:05.412241    8976 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:18:05.415231    8976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:18:05.418517    8976 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:18:05.418535    8976 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:18:05.422212    8976 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:18:05.429228    8976 start.go:295] selected driver: qemu2
	I0530 13:18:05.429238    8976 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:18:05.429250    8976 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:18:05.431093    8976 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:18:05.432589    8976 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:18:05.435313    8976 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:18:05.435331    8976 cni.go:84] Creating CNI manager for "bridge"
	I0530 13:18:05.435336    8976 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0530 13:18:05.435345    8976 start_flags.go:319] config:
	{Name:bridge-013000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:bridge-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:18:05.435434    8976 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:05.444110    8976 out.go:177] * Starting control plane node bridge-013000 in cluster bridge-013000
	I0530 13:18:05.448181    8976 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:18:05.448202    8976 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:18:05.448217    8976 cache.go:57] Caching tarball of preloaded images
	I0530 13:18:05.448280    8976 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:18:05.448285    8976 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:18:05.448349    8976 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/bridge-013000/config.json ...
	I0530 13:18:05.448364    8976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/bridge-013000/config.json: {Name:mk428d110fde068b0aecf923b42131552bb0f8cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:18:05.448558    8976 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:18:05.448572    8976 start.go:364] acquiring machines lock for bridge-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:05.448600    8976 start.go:368] acquired machines lock for "bridge-013000" in 23.584µs
	I0530 13:18:05.448612    8976 start.go:93] Provisioning new machine with config: &{Name:bridge-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:bridge-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:18:05.448635    8976 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:18:05.457261    8976 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:18:05.473157    8976 start.go:159] libmachine.API.Create for "bridge-013000" (driver="qemu2")
	I0530 13:18:05.473181    8976 client.go:168] LocalClient.Create starting
	I0530 13:18:05.473246    8976 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:18:05.473266    8976 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:05.473276    8976 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:05.473328    8976 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:18:05.473344    8976 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:05.473351    8976 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:05.473666    8976 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:18:05.589526    8976 main.go:141] libmachine: Creating SSH key...
	I0530 13:18:05.726239    8976 main.go:141] libmachine: Creating Disk image...
	I0530 13:18:05.726249    8976 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:18:05.726412    8976 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/disk.qcow2
	I0530 13:18:05.735273    8976 main.go:141] libmachine: STDOUT: 
	I0530 13:18:05.735294    8976 main.go:141] libmachine: STDERR: 
	I0530 13:18:05.735358    8976 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/disk.qcow2 +20000M
	I0530 13:18:05.742615    8976 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:18:05.742628    8976 main.go:141] libmachine: STDERR: 
	I0530 13:18:05.742648    8976 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/disk.qcow2
	I0530 13:18:05.742661    8976 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:18:05.742697    8976 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:24:4e:6b:cf:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/disk.qcow2
	I0530 13:18:05.744191    8976 main.go:141] libmachine: STDOUT: 
	I0530 13:18:05.744204    8976 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:05.744223    8976 client.go:171] LocalClient.Create took 271.043667ms
	I0530 13:18:07.746419    8976 start.go:128] duration metric: createHost completed in 2.297808125s
	I0530 13:18:07.746491    8976 start.go:83] releasing machines lock for "bridge-013000", held for 2.297931916s
	W0530 13:18:07.746560    8976 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:07.755200    8976 out.go:177] * Deleting "bridge-013000" in qemu2 ...
	W0530 13:18:07.775790    8976 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:07.775822    8976 start.go:702] Will try again in 5 seconds ...
	I0530 13:18:12.778000    8976 start.go:364] acquiring machines lock for bridge-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:12.778449    8976 start.go:368] acquired machines lock for "bridge-013000" in 338.875µs
	I0530 13:18:12.778561    8976 start.go:93] Provisioning new machine with config: &{Name:bridge-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:bridge-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:18:12.778845    8976 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:18:12.789666    8976 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:18:12.837197    8976 start.go:159] libmachine.API.Create for "bridge-013000" (driver="qemu2")
	I0530 13:18:12.837251    8976 client.go:168] LocalClient.Create starting
	I0530 13:18:12.837398    8976 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:18:12.837452    8976 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:12.837480    8976 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:12.837562    8976 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:18:12.837595    8976 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:12.837615    8976 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:12.838145    8976 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:18:12.977085    8976 main.go:141] libmachine: Creating SSH key...
	I0530 13:18:13.034703    8976 main.go:141] libmachine: Creating Disk image...
	I0530 13:18:13.034709    8976 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:18:13.034865    8976 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/disk.qcow2
	I0530 13:18:13.043671    8976 main.go:141] libmachine: STDOUT: 
	I0530 13:18:13.043691    8976 main.go:141] libmachine: STDERR: 
	I0530 13:18:13.043749    8976 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/disk.qcow2 +20000M
	I0530 13:18:13.050877    8976 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:18:13.050892    8976 main.go:141] libmachine: STDERR: 
	I0530 13:18:13.050903    8976 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/disk.qcow2
	I0530 13:18:13.050908    8976 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:18:13.050947    8976 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:b9:b4:24:06:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/bridge-013000/disk.qcow2
	I0530 13:18:13.052477    8976 main.go:141] libmachine: STDOUT: 
	I0530 13:18:13.052489    8976 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:13.052500    8976 client.go:171] LocalClient.Create took 215.248541ms
	I0530 13:18:15.054604    8976 start.go:128] duration metric: createHost completed in 2.275773709s
	I0530 13:18:15.054676    8976 start.go:83] releasing machines lock for "bridge-013000", held for 2.276254125s
	W0530 13:18:15.055281    8976 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:15.065960    8976 out.go:177] 
	W0530 13:18:15.069031    8976 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:18:15.069081    8976 out.go:239] * 
	* 
	W0530 13:18:15.071572    8976 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:18:15.081921    8976 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.75s)

TestNetworkPlugins/group/kubenet/Start (9.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-013000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.817526958s)

                                                
                                                
-- stdout --
	* [kubenet-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-013000 in cluster kubenet-013000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-013000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:18:17.196417    9087 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:18:17.196572    9087 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:17.196575    9087 out.go:309] Setting ErrFile to fd 2...
	I0530 13:18:17.196578    9087 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:17.196644    9087 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:18:17.197697    9087 out.go:303] Setting JSON to false
	I0530 13:18:17.212915    9087 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4668,"bootTime":1685473229,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:18:17.212994    9087 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:18:17.217522    9087 out.go:177] * [kubenet-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:18:17.224569    9087 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:18:17.224604    9087 notify.go:220] Checking for updates...
	I0530 13:18:17.231668    9087 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:18:17.234601    9087 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:18:17.237506    9087 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:18:17.240592    9087 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:18:17.243590    9087 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:18:17.246796    9087 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:18:17.246820    9087 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:18:17.251501    9087 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:18:17.258519    9087 start.go:295] selected driver: qemu2
	I0530 13:18:17.258525    9087 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:18:17.258531    9087 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:18:17.260438    9087 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:18:17.263525    9087 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:18:17.270683    9087 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:18:17.270702    9087 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0530 13:18:17.270706    9087 start_flags.go:319] config:
	{Name:kubenet-013000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:18:17.270790    9087 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:17.280649    9087 out.go:177] * Starting control plane node kubenet-013000 in cluster kubenet-013000
	I0530 13:18:17.284334    9087 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:18:17.284363    9087 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:18:17.284387    9087 cache.go:57] Caching tarball of preloaded images
	I0530 13:18:17.284453    9087 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:18:17.284459    9087 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:18:17.284523    9087 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/kubenet-013000/config.json ...
	I0530 13:18:17.284535    9087 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/kubenet-013000/config.json: {Name:mk40179f4243c65ce0e52d6efb4f745949c7cd86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:18:17.284746    9087 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:18:17.284761    9087 start.go:364] acquiring machines lock for kubenet-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:17.284792    9087 start.go:368] acquired machines lock for "kubenet-013000" in 25.625µs
	I0530 13:18:17.284807    9087 start.go:93] Provisioning new machine with config: &{Name:kubenet-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:18:17.284844    9087 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:18:17.293384    9087 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:18:17.310052    9087 start.go:159] libmachine.API.Create for "kubenet-013000" (driver="qemu2")
	I0530 13:18:17.310072    9087 client.go:168] LocalClient.Create starting
	I0530 13:18:17.310138    9087 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:18:17.310158    9087 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:17.310168    9087 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:17.310208    9087 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:18:17.310222    9087 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:17.310228    9087 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:17.310750    9087 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:18:17.426951    9087 main.go:141] libmachine: Creating SSH key...
	I0530 13:18:17.517479    9087 main.go:141] libmachine: Creating Disk image...
	I0530 13:18:17.517484    9087 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:18:17.517635    9087 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2
	I0530 13:18:17.526129    9087 main.go:141] libmachine: STDOUT: 
	I0530 13:18:17.526142    9087 main.go:141] libmachine: STDERR: 
	I0530 13:18:17.526186    9087 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2 +20000M
	I0530 13:18:17.533350    9087 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:18:17.533368    9087 main.go:141] libmachine: STDERR: 
	I0530 13:18:17.533382    9087 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2
	I0530 13:18:17.533389    9087 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:18:17.533433    9087 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b8:b0:1f:fc:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2
	I0530 13:18:17.534943    9087 main.go:141] libmachine: STDOUT: 
	I0530 13:18:17.534957    9087 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:17.534976    9087 client.go:171] LocalClient.Create took 224.90475ms
	I0530 13:18:19.537089    9087 start.go:128] duration metric: createHost completed in 2.252277875s
	I0530 13:18:19.537217    9087 start.go:83] releasing machines lock for "kubenet-013000", held for 2.252409167s
	W0530 13:18:19.537288    9087 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:19.547831    9087 out.go:177] * Deleting "kubenet-013000" in qemu2 ...
	W0530 13:18:19.571679    9087 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:19.571711    9087 start.go:702] Will try again in 5 seconds ...
	I0530 13:18:24.573838    9087 start.go:364] acquiring machines lock for kubenet-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:24.574270    9087 start.go:368] acquired machines lock for "kubenet-013000" in 318.458µs
	I0530 13:18:24.574385    9087 start.go:93] Provisioning new machine with config: &{Name:kubenet-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:18:24.574647    9087 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:18:24.580621    9087 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:18:24.628166    9087 start.go:159] libmachine.API.Create for "kubenet-013000" (driver="qemu2")
	I0530 13:18:24.628201    9087 client.go:168] LocalClient.Create starting
	I0530 13:18:24.628316    9087 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:18:24.628363    9087 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:24.628379    9087 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:24.628458    9087 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:18:24.628486    9087 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:24.628505    9087 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:24.628986    9087 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:18:24.757339    9087 main.go:141] libmachine: Creating SSH key...
	I0530 13:18:24.927061    9087 main.go:141] libmachine: Creating Disk image...
	I0530 13:18:24.927067    9087 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:18:24.927240    9087 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2
	I0530 13:18:24.936208    9087 main.go:141] libmachine: STDOUT: 
	I0530 13:18:24.936220    9087 main.go:141] libmachine: STDERR: 
	I0530 13:18:24.936285    9087 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2 +20000M
	I0530 13:18:24.943429    9087 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:18:24.943441    9087 main.go:141] libmachine: STDERR: 
	I0530 13:18:24.943453    9087 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2
	I0530 13:18:24.943458    9087 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:18:24.943506    9087 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:fa:21:1e:8d:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2
	I0530 13:18:24.945000    9087 main.go:141] libmachine: STDOUT: 
	I0530 13:18:24.945013    9087 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:24.945024    9087 client.go:171] LocalClient.Create took 316.826291ms
	I0530 13:18:26.947165    9087 start.go:128] duration metric: createHost completed in 2.372538708s
	I0530 13:18:26.947261    9087 start.go:83] releasing machines lock for "kubenet-013000", held for 2.373022125s
	W0530 13:18:26.947862    9087 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:26.959446    9087 out.go:177] 
	W0530 13:18:26.963456    9087 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:18:26.963487    9087 out.go:239] * 
	* 
	W0530 13:18:26.965990    9087 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:18:26.973323    9087 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.82s)

TestStoppedBinaryUpgrade/Upgrade (1.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3473210259.exe start -p stopped-upgrade-314000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3473210259.exe start -p stopped-upgrade-314000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3473210259.exe: permission denied (9.6795ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3473210259.exe start -p stopped-upgrade-314000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3473210259.exe start -p stopped-upgrade-314000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3473210259.exe: permission denied (9.333541ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3473210259.exe start -p stopped-upgrade-314000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3473210259.exe start -p stopped-upgrade-314000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3473210259.exe: permission denied (6.136625ms)
version_upgrade_test.go:201: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3473210259.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (1.83s)
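Here the legacy v1.6.2 binary is downloaded to a temp file, but fork/exec refuses to run it with "permission denied", the classic symptom of a file written without its execute bit. A hypothetical Go sketch, not the test's actual code, that checks for and repairs that condition (the path is a placeholder; the real one is the temp file named in the log):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Placeholder path; the failing run uses a temp file under /var/folders/.../T/.
		path := "/tmp/minikube-v1.6.2.example"
		info, err := os.Stat(path)
		if err != nil {
			fmt.Println("stat failed:", err)
			return
		}
		if info.Mode().Perm()&0o111 == 0 {
			// No execute bit at all: exec'ing this file returns "permission denied".
			if err := os.Chmod(path, 0o755); err != nil {
				fmt.Println("chmod failed:", err)
				return
			}
			fmt.Println("execute bit added")
		} else {
			// Already executable: the failure would then lie elsewhere (e.g. a noexec mount).
			fmt.Println("file is already executable")
		}
	}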

TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-314000
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-314000: exit status 85 (114.164583ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000 sudo cat                | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000 sudo cat                | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000 sudo cat                | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-013000                         | enable-default-cni-013000 | jenkins | v1.30.1 | 30 May 23 13:18 PDT | 30 May 23 13:18 PDT |
	| start   | -p bridge-013000 --memory=3072                       | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=bridge --driver=qemu2                          |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo cat                            | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo cat                            | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | /etc/hosts                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo cat                            | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo crictl                         | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | pods                                                 |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo crictl                         | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | ps --all                                             |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo find                           | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo ip a s                         | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	| ssh     | -p bridge-013000 sudo ip r s                         | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	| ssh     | -p bridge-013000 sudo                                | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | iptables-save                                        |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo iptables                       | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | -t nat -L -n -v                                      |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo                                | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo                                | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo                                | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo cat                            | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo cat                            | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo                                | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo                                | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo cat                            | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo docker                         | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo                                | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo                                | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo cat                            | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo cat                            | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo                                | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo                                | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo                                | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo cat                            | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo cat                            | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo                                | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo                                | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo                                | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo find                           | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p bridge-013000 sudo crio                           | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p bridge-013000                                     | bridge-013000             | jenkins | v1.30.1 | 30 May 23 13:18 PDT | 30 May 23 13:18 PDT |
	| start   | -p kubenet-013000                                    | kubenet-013000            | jenkins | v1.30.1 | 30 May 23 13:18 PDT |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --network-plugin=kubenet                             |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/30 13:18:17
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0530 13:18:17.196417    9087 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:18:17.196572    9087 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:17.196575    9087 out.go:309] Setting ErrFile to fd 2...
	I0530 13:18:17.196578    9087 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:17.196644    9087 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:18:17.197697    9087 out.go:303] Setting JSON to false
	I0530 13:18:17.212915    9087 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4668,"bootTime":1685473229,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:18:17.212994    9087 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:18:17.217522    9087 out.go:177] * [kubenet-013000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:18:17.224569    9087 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:18:17.224604    9087 notify.go:220] Checking for updates...
	I0530 13:18:17.231668    9087 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:18:17.234601    9087 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:18:17.237506    9087 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:18:17.240592    9087 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:18:17.243590    9087 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:18:17.246796    9087 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:18:17.246820    9087 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:18:17.251501    9087 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:18:17.258519    9087 start.go:295] selected driver: qemu2
	I0530 13:18:17.258525    9087 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:18:17.258531    9087 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:18:17.260438    9087 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:18:17.263525    9087 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:18:17.270683    9087 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:18:17.270702    9087 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0530 13:18:17.270706    9087 start_flags.go:319] config:
	{Name:kubenet-013000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:18:17.270790    9087 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:17.280649    9087 out.go:177] * Starting control plane node kubenet-013000 in cluster kubenet-013000
	I0530 13:18:17.284334    9087 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:18:17.284363    9087 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:18:17.284387    9087 cache.go:57] Caching tarball of preloaded images
	I0530 13:18:17.284453    9087 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:18:17.284459    9087 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:18:17.284523    9087 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/kubenet-013000/config.json ...
	I0530 13:18:17.284535    9087 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/kubenet-013000/config.json: {Name:mk40179f4243c65ce0e52d6efb4f745949c7cd86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:18:17.284746    9087 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:18:17.284761    9087 start.go:364] acquiring machines lock for kubenet-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:17.284792    9087 start.go:368] acquired machines lock for "kubenet-013000" in 25.625µs
	I0530 13:18:17.284807    9087 start.go:93] Provisioning new machine with config: &{Name:kubenet-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:18:17.284844    9087 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:18:17.293384    9087 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0530 13:18:17.310052    9087 start.go:159] libmachine.API.Create for "kubenet-013000" (driver="qemu2")
	I0530 13:18:17.310072    9087 client.go:168] LocalClient.Create starting
	I0530 13:18:17.310138    9087 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:18:17.310158    9087 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:17.310168    9087 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:17.310208    9087 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:18:17.310222    9087 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:17.310228    9087 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:17.310750    9087 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:18:17.426951    9087 main.go:141] libmachine: Creating SSH key...
	I0530 13:18:17.517479    9087 main.go:141] libmachine: Creating Disk image...
	I0530 13:18:17.517484    9087 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:18:17.517635    9087 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2
	I0530 13:18:17.526129    9087 main.go:141] libmachine: STDOUT: 
	I0530 13:18:17.526142    9087 main.go:141] libmachine: STDERR: 
	I0530 13:18:17.526186    9087 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2 +20000M
	I0530 13:18:17.533350    9087 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:18:17.533368    9087 main.go:141] libmachine: STDERR: 
	I0530 13:18:17.533382    9087 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2
	I0530 13:18:17.533389    9087 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:18:17.533433    9087 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b8:b0:1f:fc:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/kubenet-013000/disk.qcow2
	I0530 13:18:17.534943    9087 main.go:141] libmachine: STDOUT: 
	I0530 13:18:17.534957    9087 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:17.534976    9087 client.go:171] LocalClient.Create took 224.90475ms
	I0530 13:18:19.537089    9087 start.go:128] duration metric: createHost completed in 2.252277875s
	I0530 13:18:19.537217    9087 start.go:83] releasing machines lock for "kubenet-013000", held for 2.252409167s
	W0530 13:18:19.537288    9087 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:19.547831    9087 out.go:177] * Deleting "kubenet-013000" in qemu2 ...
	W0530 13:18:19.571679    9087 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:19.571711    9087 start.go:702] Will try again in 5 seconds ...
	I0530 13:18:24.573838    9087 start.go:364] acquiring machines lock for kubenet-013000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:24.574270    9087 start.go:368] acquired machines lock for "kubenet-013000" in 318.458µs
	I0530 13:18:24.574385    9087 start.go:93] Provisioning new machine with config: &{Name:kubenet-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-013000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:18:24.574647    9087 start.go:125] createHost starting for "" (driver="qemu2")
	
	* 
	* Profile "stopped-upgrade-314000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-314000"

-- /stdout --
version_upgrade_test.go:220: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)
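Exit status 85 here is a follow-on failure: the Upgrade step above never managed to create the stopped-upgrade-314000 profile, so minikube logs -p stopped-upgrade-314000 has nothing to read (the output itself says to run "minikube profile list"). A hypothetical guard, not present in version_upgrade_test.go, sketching how such a follow-up check could confirm the profile exists before asking for its logs:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Uses the same binary under test that the log above invokes.
		out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list").CombinedOutput()
		if err != nil {
			fmt.Println("profile list failed:", err)
		}
		if !strings.Contains(string(out), "stopped-upgrade-314000") {
			fmt.Println("profile not found; the logs check cannot succeed")
			return
		}
		fmt.Println("profile exists; `minikube logs -p stopped-upgrade-314000` should have output")
	}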

TestStartStop/group/old-k8s-version/serial/FirstStart (11.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-212000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-212000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (11.093299125s)

-- stdout --
	* [old-k8s-version-212000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-212000 in cluster old-k8s-version-212000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-212000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0530 13:18:25.641964    9126 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:18:25.642085    9126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:25.642088    9126 out.go:309] Setting ErrFile to fd 2...
	I0530 13:18:25.642091    9126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:25.642154    9126 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:18:25.643142    9126 out.go:303] Setting JSON to false
	I0530 13:18:25.658410    9126 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4676,"bootTime":1685473229,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:18:25.658464    9126 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:18:25.663325    9126 out.go:177] * [old-k8s-version-212000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:18:25.666360    9126 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:18:25.666406    9126 notify.go:220] Checking for updates...
	I0530 13:18:25.674242    9126 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:18:25.677337    9126 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:18:25.680305    9126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:18:25.683252    9126 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:18:25.686296    9126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:18:25.689546    9126 config.go:182] Loaded profile config "kubenet-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:18:25.689611    9126 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:18:25.689633    9126 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:18:25.693291    9126 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:18:25.700333    9126 start.go:295] selected driver: qemu2
	I0530 13:18:25.700338    9126 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:18:25.700345    9126 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:18:25.702129    9126 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:18:25.705411    9126 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:18:25.708363    9126 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:18:25.708378    9126 cni.go:84] Creating CNI manager for ""
	I0530 13:18:25.708387    9126 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0530 13:18:25.708390    9126 start_flags.go:319] config:
	{Name:old-k8s-version-212000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-212000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:18:25.708460    9126 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:25.710463    9126 out.go:177] * Starting control plane node old-k8s-version-212000 in cluster old-k8s-version-212000
	I0530 13:18:25.718318    9126 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0530 13:18:25.718338    9126 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0530 13:18:25.718351    9126 cache.go:57] Caching tarball of preloaded images
	I0530 13:18:25.718410    9126 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:18:25.718416    9126 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0530 13:18:25.718471    9126 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/old-k8s-version-212000/config.json ...
	I0530 13:18:25.718483    9126 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/old-k8s-version-212000/config.json: {Name:mk3b93fd1bbc45e7ee31325f81eb1bf3cee10042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:18:25.718702    9126 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:18:25.718715    9126 start.go:364] acquiring machines lock for old-k8s-version-212000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:26.947444    9126 start.go:368] acquired machines lock for "old-k8s-version-212000" in 1.228662125s
	I0530 13:18:26.947629    9126 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-212000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-212000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:18:26.947861    9126 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:18:26.956317    9126 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:18:27.005000    9126 start.go:159] libmachine.API.Create for "old-k8s-version-212000" (driver="qemu2")
	I0530 13:18:27.005129    9126 client.go:168] LocalClient.Create starting
	I0530 13:18:27.005296    9126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:18:27.005340    9126 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:27.005363    9126 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:27.005436    9126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:18:27.005463    9126 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:27.005475    9126 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:27.006059    9126 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:18:27.134548    9126 main.go:141] libmachine: Creating SSH key...
	I0530 13:18:27.330238    9126 main.go:141] libmachine: Creating Disk image...
	I0530 13:18:27.330246    9126 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:18:27.330389    9126 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/disk.qcow2
	I0530 13:18:27.339022    9126 main.go:141] libmachine: STDOUT: 
	I0530 13:18:27.339041    9126 main.go:141] libmachine: STDERR: 
	I0530 13:18:27.339095    9126 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/disk.qcow2 +20000M
	I0530 13:18:27.347439    9126 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:18:27.347461    9126 main.go:141] libmachine: STDERR: 
	I0530 13:18:27.347481    9126 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/disk.qcow2
	I0530 13:18:27.347488    9126 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:18:27.347523    9126 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:90:e8:48:31:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/disk.qcow2
	I0530 13:18:27.349287    9126 main.go:141] libmachine: STDOUT: 
	I0530 13:18:27.349300    9126 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:27.349318    9126 client.go:171] LocalClient.Create took 344.188083ms
	I0530 13:18:29.351360    9126 start.go:128] duration metric: createHost completed in 2.403538s
	I0530 13:18:29.351378    9126 start.go:83] releasing machines lock for "old-k8s-version-212000", held for 2.403956792s
	W0530 13:18:29.351393    9126 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:29.362801    9126 out.go:177] * Deleting "old-k8s-version-212000" in qemu2 ...
	W0530 13:18:29.370661    9126 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:29.370672    9126 start.go:702] Will try again in 5 seconds ...
	I0530 13:18:34.370902    9126 start.go:364] acquiring machines lock for old-k8s-version-212000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:34.371504    9126 start.go:368] acquired machines lock for "old-k8s-version-212000" in 478.541µs
	I0530 13:18:34.371650    9126 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-212000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-212000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:18:34.371919    9126 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:18:34.381018    9126 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:18:34.428683    9126 start.go:159] libmachine.API.Create for "old-k8s-version-212000" (driver="qemu2")
	I0530 13:18:34.428728    9126 client.go:168] LocalClient.Create starting
	I0530 13:18:34.428874    9126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:18:34.428923    9126 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:34.428944    9126 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:34.429037    9126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:18:34.429085    9126 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:34.429102    9126 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:34.429644    9126 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:18:34.558964    9126 main.go:141] libmachine: Creating SSH key...
	I0530 13:18:34.649776    9126 main.go:141] libmachine: Creating Disk image...
	I0530 13:18:34.649782    9126 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:18:34.649953    9126 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/disk.qcow2
	I0530 13:18:34.658765    9126 main.go:141] libmachine: STDOUT: 
	I0530 13:18:34.658779    9126 main.go:141] libmachine: STDERR: 
	I0530 13:18:34.658833    9126 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/disk.qcow2 +20000M
	I0530 13:18:34.666331    9126 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:18:34.666346    9126 main.go:141] libmachine: STDERR: 
	I0530 13:18:34.666357    9126 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/disk.qcow2
	I0530 13:18:34.666363    9126 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:18:34.666407    9126 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:17:7d:53:6a:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/disk.qcow2
	I0530 13:18:34.667923    9126 main.go:141] libmachine: STDOUT: 
	I0530 13:18:34.667937    9126 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:34.667949    9126 client.go:171] LocalClient.Create took 239.221792ms
	I0530 13:18:36.668142    9126 start.go:128] duration metric: createHost completed in 2.296248042s
	I0530 13:18:36.668195    9126 start.go:83] releasing machines lock for "old-k8s-version-212000", held for 2.296716s
	W0530 13:18:36.668760    9126 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-212000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-212000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:36.683315    9126 out.go:177] 
	W0530 13:18:36.687540    9126 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:18:36.687571    9126 out.go:239] * 
	* 
	W0530 13:18:36.689484    9126 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:18:36.699124    9126 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-212000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000: exit status 7 (49.809833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-212000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (11.14s)
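Note: every start failure in this StartStop group reduces to the same stderr line, Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver launches the guest through /opt/socket_vmnet/bin/socket_vmnet_client, so host creation cannot succeed while nothing is listening on /var/run/socket_vmnet. A minimal check on the CI host, assuming socket_vmnet was installed via Homebrew as described in the minikube qemu2 driver docs (a diagnostic sketch, not part of the test harness):

    # Is the unix socket present at the path the driver uses?
    $ ls -l /var/run/socket_vmnet
    # Restart the daemon that owns the socket (assumes the Homebrew socket_vmnet service)
    $ sudo brew services restart socket_vmnet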

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (9.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-389000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-389000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (9.880907041s)

                                                
                                                
-- stdout --
	* [no-preload-389000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-389000 in cluster no-preload-389000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-389000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:18:29.110342    9231 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:18:29.110477    9231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:29.110479    9231 out.go:309] Setting ErrFile to fd 2...
	I0530 13:18:29.110482    9231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:29.110554    9231 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:18:29.111557    9231 out.go:303] Setting JSON to false
	I0530 13:18:29.126754    9231 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4680,"bootTime":1685473229,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:18:29.126814    9231 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:18:29.131640    9231 out.go:177] * [no-preload-389000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:18:29.138506    9231 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:18:29.138548    9231 notify.go:220] Checking for updates...
	I0530 13:18:29.145464    9231 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:18:29.148454    9231 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:18:29.151509    9231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:18:29.154407    9231 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:18:29.157493    9231 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:18:29.160777    9231 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:18:29.160843    9231 config.go:182] Loaded profile config "old-k8s-version-212000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0530 13:18:29.160862    9231 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:18:29.164425    9231 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:18:29.171445    9231 start.go:295] selected driver: qemu2
	I0530 13:18:29.171451    9231 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:18:29.171457    9231 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:18:29.173320    9231 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:18:29.174745    9231 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:18:29.177560    9231 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:18:29.177581    9231 cni.go:84] Creating CNI manager for ""
	I0530 13:18:29.177589    9231 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:18:29.177601    9231 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0530 13:18:29.177607    9231 start_flags.go:319] config:
	{Name:no-preload-389000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-389000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:18:29.177684    9231 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:29.186438    9231 out.go:177] * Starting control plane node no-preload-389000 in cluster no-preload-389000
	I0530 13:18:29.190454    9231 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:18:29.190525    9231 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/no-preload-389000/config.json ...
	I0530 13:18:29.190542    9231 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/no-preload-389000/config.json: {Name:mk92b50113bac6f1ca0aab2f04aeaf9d2c50220f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:18:29.190545    9231 cache.go:107] acquiring lock: {Name:mk62b96d9ddb939b7d11307fd6bcb7d9cd1d8977 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:29.190549    9231 cache.go:107] acquiring lock: {Name:mk1f7e1161855fb214230a0b223d520a4ca2b6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:29.190562    9231 cache.go:107] acquiring lock: {Name:mk9c084f44b94373a5bcb065e390433d74dd59d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:29.190629    9231 cache.go:115] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0530 13:18:29.190638    9231 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.084µs
	I0530 13:18:29.190645    9231 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0530 13:18:29.190651    9231 cache.go:107] acquiring lock: {Name:mk9b8316b02c0e238fff31b320be99022c5a16c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:29.190737    9231 cache.go:107] acquiring lock: {Name:mk971e29be0acffdd5f6c11158ac272bdd09d39a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:29.190744    9231 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.2
	I0530 13:18:29.190723    9231 cache.go:107] acquiring lock: {Name:mkecb57067acaf097cf5474803840fa50c26b736 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:29.190767    9231 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.2
	I0530 13:18:29.190800    9231 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:18:29.191029    9231 start.go:364] acquiring machines lock for no-preload-389000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:29.191105    9231 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0530 13:18:29.191171    9231 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.2
	I0530 13:18:29.191198    9231 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0530 13:18:29.191233    9231 cache.go:107] acquiring lock: {Name:mk1d8690d0342142db588633980b8de082503711 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:29.190768    9231 cache.go:107] acquiring lock: {Name:mk0e8c28ec88a058505243b4337b811c4f4babfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:29.191330    9231 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.2
	I0530 13:18:29.191411    9231 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0530 13:18:29.209656    9231 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.2
	I0530 13:18:29.211011    9231 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.2
	I0530 13:18:29.213317    9231 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0530 13:18:29.213342    9231 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0530 13:18:29.213762    9231 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.2
	I0530 13:18:29.214902    9231 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.2
	I0530 13:18:29.215132    9231 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0530 13:18:29.351493    9231 start.go:368] acquired machines lock for "no-preload-389000" in 160.443416ms
	I0530 13:18:29.351538    9231 start.go:93] Provisioning new machine with config: &{Name:no-preload-389000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-389000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:18:29.351617    9231 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:18:29.358799    9231 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:18:29.373498    9231 start.go:159] libmachine.API.Create for "no-preload-389000" (driver="qemu2")
	I0530 13:18:29.373526    9231 client.go:168] LocalClient.Create starting
	I0530 13:18:29.373594    9231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:18:29.373616    9231 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:29.373626    9231 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:29.373668    9231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:18:29.373683    9231 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:29.373690    9231 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:29.376081    9231 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:18:29.497263    9231 main.go:141] libmachine: Creating SSH key...
	I0530 13:18:29.530063    9231 main.go:141] libmachine: Creating Disk image...
	I0530 13:18:29.530074    9231 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:18:29.530247    9231 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/disk.qcow2
	I0530 13:18:29.539059    9231 main.go:141] libmachine: STDOUT: 
	I0530 13:18:29.539083    9231 main.go:141] libmachine: STDERR: 
	I0530 13:18:29.539146    9231 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/disk.qcow2 +20000M
	I0530 13:18:29.547476    9231 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:18:29.547492    9231 main.go:141] libmachine: STDERR: 
	I0530 13:18:29.547516    9231 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/disk.qcow2
	I0530 13:18:29.547523    9231 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:18:29.547562    9231 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:ce:58:ce:72:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/disk.qcow2
	I0530 13:18:29.549476    9231 main.go:141] libmachine: STDOUT: 
	I0530 13:18:29.549511    9231 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:29.549539    9231 client.go:171] LocalClient.Create took 176.011458ms
	I0530 13:18:30.482160    9231 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2
	I0530 13:18:30.504139    9231 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0530 13:18:30.510385    9231 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2
	I0530 13:18:30.633084    9231 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0530 13:18:30.689222    9231 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2
	I0530 13:18:30.858341    9231 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2
	I0530 13:18:31.058149    9231 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0530 13:18:31.143988    9231 cache.go:157] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0530 13:18:31.144035    9231 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.953385292s
	I0530 13:18:31.144068    9231 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0530 13:18:31.549799    9231 start.go:128] duration metric: createHost completed in 2.198188917s
	I0530 13:18:31.549850    9231 start.go:83] releasing machines lock for "no-preload-389000", held for 2.198379958s
	W0530 13:18:31.549921    9231 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:31.564259    9231 out.go:177] * Deleting "no-preload-389000" in qemu2 ...
	W0530 13:18:31.585650    9231 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:31.585684    9231 start.go:702] Will try again in 5 seconds ...
	I0530 13:18:32.046056    9231 cache.go:157] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0530 13:18:32.046105    9231 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 2.855512125s
	I0530 13:18:32.046134    9231 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0530 13:18:33.018207    9231 cache.go:157] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 exists
	I0530 13:18:33.018248    9231 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.2" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2" took 3.827095875s
	I0530 13:18:33.018296    9231 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.2 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 succeeded
	I0530 13:18:34.061462    9231 cache.go:157] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 exists
	I0530 13:18:34.061533    9231 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.2" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2" took 4.871082542s
	I0530 13:18:34.061568    9231 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.2 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 succeeded
	I0530 13:18:34.323873    9231 cache.go:157] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 exists
	I0530 13:18:34.323927    9231 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.2" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2" took 5.133501667s
	I0530 13:18:34.323954    9231 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.2 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 succeeded
	I0530 13:18:35.523252    9231 cache.go:157] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 exists
	I0530 13:18:35.523296    9231 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.2" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2" took 6.332723167s
	I0530 13:18:35.523322    9231 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.2 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 succeeded
	I0530 13:18:36.585776    9231 start.go:364] acquiring machines lock for no-preload-389000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:36.668288    9231 start.go:368] acquired machines lock for "no-preload-389000" in 82.426375ms
	I0530 13:18:36.668464    9231 start.go:93] Provisioning new machine with config: &{Name:no-preload-389000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-389000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:18:36.668712    9231 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:18:36.678367    9231 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:18:36.724946    9231 start.go:159] libmachine.API.Create for "no-preload-389000" (driver="qemu2")
	I0530 13:18:36.724984    9231 client.go:168] LocalClient.Create starting
	I0530 13:18:36.725082    9231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:18:36.725132    9231 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:36.725166    9231 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:36.725239    9231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:18:36.725272    9231 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:36.725290    9231 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:36.725721    9231 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:18:36.852982    9231 main.go:141] libmachine: Creating SSH key...
	I0530 13:18:36.888685    9231 main.go:141] libmachine: Creating Disk image...
	I0530 13:18:36.888693    9231 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:18:36.888841    9231 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/disk.qcow2
	I0530 13:18:36.902372    9231 main.go:141] libmachine: STDOUT: 
	I0530 13:18:36.902387    9231 main.go:141] libmachine: STDERR: 
	I0530 13:18:36.902449    9231 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/disk.qcow2 +20000M
	I0530 13:18:36.914012    9231 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:18:36.914031    9231 main.go:141] libmachine: STDERR: 
	I0530 13:18:36.914046    9231 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/disk.qcow2
	I0530 13:18:36.914054    9231 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:18:36.914104    9231 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:72:dc:2e:ec:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/disk.qcow2
	I0530 13:18:36.915860    9231 main.go:141] libmachine: STDOUT: 
	I0530 13:18:36.915875    9231 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:36.915887    9231 client.go:171] LocalClient.Create took 190.902166ms
	I0530 13:18:38.916773    9231 start.go:128] duration metric: createHost completed in 2.248025917s
	I0530 13:18:38.916830    9231 start.go:83] releasing machines lock for "no-preload-389000", held for 2.248568666s
	W0530 13:18:38.917245    9231 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-389000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-389000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:38.926065    9231 out.go:177] 
	W0530 13:18:38.938277    9231 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:18:38.938302    9231 out.go:239] * 
	* 
	W0530 13:18:38.941084    9231 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:18:38.950116    9231 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-389000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000: exit status 7 (62.905708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-389000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.95s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-212000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-212000 create -f testdata/busybox.yaml: exit status 1 (29.990083ms)

                                                
                                                
** stderr ** 
	W0530 13:18:36.789977    9345 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "old-k8s-version-212000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-212000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000: exit status 7 (31.881167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-212000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000: exit status 7 (32.901542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-212000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
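The DeployApp failure is downstream of FirstStart: the VM never booted, so minikube never wrote a cluster entry into the kubeconfig, and kubectl therefore rejects the old-k8s-version-212000 context. A quick way to confirm the missing context against the same kubeconfig the tests use (plain kubectl, a diagnostic sketch rather than part of the harness):

    $ KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig kubectl config get-contexts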

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-212000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-212000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-212000 describe deploy/metrics-server -n kube-system: exit status 1 (25.66375ms)

                                                
                                                
** stderr ** 
	W0530 13:18:36.939307    9355 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "old-k8s-version-212000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-212000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000: exit status 7 (27.360334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-212000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (6.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-212000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-212000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (6.899637792s)

                                                
                                                
-- stdout --
	* [old-k8s-version-212000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-212000 in cluster old-k8s-version-212000
	* Restarting existing qemu2 VM for "old-k8s-version-212000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-212000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:18:37.141356    9364 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:18:37.141488    9364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:37.141491    9364 out.go:309] Setting ErrFile to fd 2...
	I0530 13:18:37.141493    9364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:37.141559    9364 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:18:37.142596    9364 out.go:303] Setting JSON to false
	I0530 13:18:37.158068    9364 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4688,"bootTime":1685473229,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:18:37.158144    9364 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:18:37.166839    9364 out.go:177] * [old-k8s-version-212000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:18:37.172755    9364 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:18:37.169949    9364 notify.go:220] Checking for updates...
	I0530 13:18:37.179789    9364 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:18:37.182811    9364 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:18:37.189644    9364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:18:37.197806    9364 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:18:37.200721    9364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:18:37.204000    9364 config.go:182] Loaded profile config "old-k8s-version-212000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0530 13:18:37.207784    9364 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0530 13:18:37.210769    9364 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:18:37.214789    9364 out.go:177] * Using the qemu2 driver based on existing profile
	I0530 13:18:37.220740    9364 start.go:295] selected driver: qemu2
	I0530 13:18:37.220745    9364 start.go:870] validating driver "qemu2" against &{Name:old-k8s-version-212000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-212000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:18:37.220802    9364 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:18:37.222683    9364 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:18:37.222704    9364 cni.go:84] Creating CNI manager for ""
	I0530 13:18:37.222714    9364 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0530 13:18:37.222720    9364 start_flags.go:319] config:
	{Name:old-k8s-version-212000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-212000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Use
rs:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:18:37.222802    9364 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:37.230595    9364 out.go:177] * Starting control plane node old-k8s-version-212000 in cluster old-k8s-version-212000
	I0530 13:18:37.234781    9364 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0530 13:18:37.234806    9364 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0530 13:18:37.234830    9364 cache.go:57] Caching tarball of preloaded images
	I0530 13:18:37.234894    9364 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:18:37.234900    9364 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0530 13:18:37.234974    9364 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/old-k8s-version-212000/config.json ...
	I0530 13:18:37.235253    9364 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:18:37.235278    9364 start.go:364] acquiring machines lock for old-k8s-version-212000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:38.916985    9364 start.go:368] acquired machines lock for "old-k8s-version-212000" in 1.681679375s
	I0530 13:18:38.917201    9364 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:18:38.917223    9364 fix.go:55] fixHost starting: 
	I0530 13:18:38.917890    9364 fix.go:103] recreateIfNeeded on old-k8s-version-212000: state=Stopped err=<nil>
	W0530 13:18:38.917928    9364 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:18:38.934024    9364 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-212000" ...
	I0530 13:18:38.942237    9364 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:17:7d:53:6a:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/disk.qcow2
	I0530 13:18:38.952027    9364 main.go:141] libmachine: STDOUT: 
	I0530 13:18:38.952093    9364 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:38.952245    9364 fix.go:57] fixHost completed within 35.021708ms
	I0530 13:18:38.952265    9364 start.go:83] releasing machines lock for "old-k8s-version-212000", held for 35.22ms
	W0530 13:18:38.952296    9364 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:18:38.952556    9364 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:38.952572    9364 start.go:702] Will try again in 5 seconds ...
	I0530 13:18:43.953418    9364 start.go:364] acquiring machines lock for old-k8s-version-212000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:43.953861    9364 start.go:368] acquired machines lock for "old-k8s-version-212000" in 352.334µs
	I0530 13:18:43.954449    9364 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:18:43.954474    9364 fix.go:55] fixHost starting: 
	I0530 13:18:43.955251    9364 fix.go:103] recreateIfNeeded on old-k8s-version-212000: state=Stopped err=<nil>
	W0530 13:18:43.955280    9364 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:18:43.963735    9364 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-212000" ...
	I0530 13:18:43.967831    9364 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:17:7d:53:6a:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/old-k8s-version-212000/disk.qcow2
	I0530 13:18:43.977136    9364 main.go:141] libmachine: STDOUT: 
	I0530 13:18:43.977191    9364 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:43.977294    9364 fix.go:57] fixHost completed within 22.821125ms
	I0530 13:18:43.977314    9364 start.go:83] releasing machines lock for "old-k8s-version-212000", held for 23.43275ms
	W0530 13:18:43.977600    9364 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-212000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-212000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:43.984780    9364 out.go:177] 
	W0530 13:18:43.988941    9364 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:18:43.988984    9364 out.go:239] * 
	* 
	W0530 13:18:43.991855    9364 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:18:44.001665    9364 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-212000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000: exit status 7 (64.87875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-212000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (6.97s)
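Every failure in this group reduces to the same line in the driver log above: the qemu2 driver cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet. A minimal shell sketch for checking that daemon on the build host follows; the Homebrew service name and the --network fallback are assumptions about this particular install, not something the log confirms.

    # Sketch: diagnose the recurring 'Failed to connect to "/var/run/socket_vmnet": Connection refused'.
    ls -l /var/run/socket_vmnet                   # does the unix socket exist?
    pgrep -fl socket_vmnet                        # is the daemon process alive?
    sudo launchctl list | grep -i socket_vmnet    # or check a launchd service, if one was installed
    # If socket_vmnet was installed via Homebrew (assumption), restarting the service usually clears this:
    sudo brew services restart socket_vmnet
    # Fallback (flag value is an assumption; see "out/minikube-darwin-arm64 start --help" for accepted values):
    out/minikube-darwin-arm64 start -p old-k8s-version-212000 --driver=qemu2 --network=user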

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-389000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-389000 create -f testdata/busybox.yaml: exit status 1 (29.570708ms)

                                                
                                                
** stderr ** 
	W0530 13:18:39.061172    9374 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "no-preload-389000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-389000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000: exit status 7 (28.301083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-389000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000: exit status 7 (27.8425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-389000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
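The create never reaches an API server: the kubeconfig the harness points at does not exist, so the no-preload-389000 context cannot be resolved. A short sketch of the pre-flight checks one might run before the deploy step; all commands are standard kubectl, and the paths are copied from the log.

    # Sketch: confirm the kubeconfig and context exist before deploying busybox.
    export KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
    ls -l "$KUBECONFIG"                               # the loader warning above means this file is missing
    kubectl config get-contexts                       # should list no-preload-389000 once the cluster is up
    kubectl --context no-preload-389000 get nodes     # fails fast if the context or API server is unreachable
    kubectl --context no-preload-389000 create -f testdata/busybox.yaml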

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-389000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-389000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-389000 describe deploy/metrics-server -n kube-system: exit status 1 (25.7435ms)

                                                
                                                
** stderr ** 
	W0530 13:18:39.198103    9381 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "no-preload-389000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-389000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000: exit status 7 (28.118292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-389000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
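The assertion expects the metrics-server Deployment to carry the overridden registry and image (fake.domain/registry.k8s.io/echoserver:1.4). A sketch of the same check, runnable once a cluster actually exists; the enable flags are copied from the test, while the jsonpath expression is an assumption about the standard Deployment layout rather than anything shown in the log.

    # Sketch: enable the addon with overridden image/registry, then read back the image the Deployment uses.
    out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-389000 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context no-preload-389000 -n kube-system get deploy metrics-server \
        -o jsonpath='{.spec.template.spec.containers[0].image}'
    # The assertion above wants this to contain fake.domain/registry.k8s.io/echoserver:1.4.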

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-389000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-389000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (5.160054625s)

                                                
                                                
-- stdout --
	* [no-preload-389000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-389000 in cluster no-preload-389000
	* Restarting existing qemu2 VM for "no-preload-389000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-389000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:18:39.407338    9390 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:18:39.407438    9390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:39.407441    9390 out.go:309] Setting ErrFile to fd 2...
	I0530 13:18:39.407444    9390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:39.407514    9390 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:18:39.408446    9390 out.go:303] Setting JSON to false
	I0530 13:18:39.423448    9390 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4690,"bootTime":1685473229,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:18:39.423517    9390 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:18:39.431815    9390 out.go:177] * [no-preload-389000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:18:39.434842    9390 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:18:39.434929    9390 notify.go:220] Checking for updates...
	I0530 13:18:39.440848    9390 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:18:39.443801    9390 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:18:39.446854    9390 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:18:39.449934    9390 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:18:39.451270    9390 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:18:39.454063    9390 config.go:182] Loaded profile config "no-preload-389000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:18:39.454284    9390 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:18:39.458817    9390 out.go:177] * Using the qemu2 driver based on existing profile
	I0530 13:18:39.463845    9390 start.go:295] selected driver: qemu2
	I0530 13:18:39.463852    9390 start.go:870] validating driver "qemu2" against &{Name:no-preload-389000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-389000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:
false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:18:39.463920    9390 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:18:39.465697    9390 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:18:39.465716    9390 cni.go:84] Creating CNI manager for ""
	I0530 13:18:39.465726    9390 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:18:39.465731    9390 start_flags.go:319] config:
	{Name:no-preload-389000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-389000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:18:39.465785    9390 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:39.473811    9390 out.go:177] * Starting control plane node no-preload-389000 in cluster no-preload-389000
	I0530 13:18:39.477831    9390 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:18:39.477930    9390 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/no-preload-389000/config.json ...
	I0530 13:18:39.477995    9390 cache.go:107] acquiring lock: {Name:mk62b96d9ddb939b7d11307fd6bcb7d9cd1d8977 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:39.477995    9390 cache.go:107] acquiring lock: {Name:mk1f7e1161855fb214230a0b223d520a4ca2b6d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:39.478010    9390 cache.go:107] acquiring lock: {Name:mk9c084f44b94373a5bcb065e390433d74dd59d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:39.478063    9390 cache.go:107] acquiring lock: {Name:mk1d8690d0342142db588633980b8de082503711 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:39.478065    9390 cache.go:107] acquiring lock: {Name:mk971e29be0acffdd5f6c11158ac272bdd09d39a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:39.478081    9390 cache.go:107] acquiring lock: {Name:mk0e8c28ec88a058505243b4337b811c4f4babfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:39.478120    9390 cache.go:115] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 exists
	I0530 13:18:39.478129    9390 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.2" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2" took 140.166µs
	I0530 13:18:39.478127    9390 cache.go:115] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 exists
	I0530 13:18:39.478149    9390 cache.go:115] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0530 13:18:39.478144    9390 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.2" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2" took 158.917µs
	I0530 13:18:39.478167    9390 cache.go:115] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0530 13:18:39.478174    9390 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 180.834µs
	I0530 13:18:39.478178    9390 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0530 13:18:39.478168    9390 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.2 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 succeeded
	I0530 13:18:39.478161    9390 cache.go:107] acquiring lock: {Name:mk9b8316b02c0e238fff31b320be99022c5a16c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:39.478168    9390 cache.go:107] acquiring lock: {Name:mkecb57067acaf097cf5474803840fa50c26b736 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:39.478150    9390 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.2 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 succeeded
	I0530 13:18:39.478186    9390 cache.go:115] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 exists
	I0530 13:18:39.478194    9390 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 96.125µs
	I0530 13:18:39.478258    9390 cache.go:115] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0530 13:18:39.478257    9390 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.2" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2" took 185.042µs
	I0530 13:18:39.478264    9390 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 171.375µs
	I0530 13:18:39.478271    9390 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0530 13:18:39.478269    9390 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.2 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 succeeded
	I0530 13:18:39.478284    9390 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0530 13:18:39.478209    9390 cache.go:115] /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 exists
	I0530 13:18:39.478294    9390 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.2" -> "/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2" took 230.375µs
	I0530 13:18:39.478296    9390 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:18:39.478286    9390 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0530 13:18:39.478315    9390 start.go:364] acquiring machines lock for no-preload-389000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:39.478298    9390 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.2 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 succeeded
	I0530 13:18:39.478341    9390 start.go:368] acquired machines lock for "no-preload-389000" in 20.667µs
	I0530 13:18:39.478351    9390 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:18:39.478355    9390 fix.go:55] fixHost starting: 
	I0530 13:18:39.478474    9390 fix.go:103] recreateIfNeeded on no-preload-389000: state=Stopped err=<nil>
	W0530 13:18:39.478481    9390 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:18:39.486797    9390 out.go:177] * Restarting existing qemu2 VM for "no-preload-389000" ...
	I0530 13:18:39.488244    9390 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:72:dc:2e:ec:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/disk.qcow2
	I0530 13:18:39.490233    9390 main.go:141] libmachine: STDOUT: 
	I0530 13:18:39.490247    9390 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:39.490283    9390 fix.go:57] fixHost completed within 11.920666ms
	I0530 13:18:39.490289    9390 start.go:83] releasing machines lock for "no-preload-389000", held for 11.943666ms
	W0530 13:18:39.490295    9390 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:18:39.490373    9390 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:39.490378    9390 start.go:702] Will try again in 5 seconds ...
	I0530 13:18:39.494179    9390 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0530 13:18:40.521845    9390 cache.go:162] opening:  /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0530 13:18:44.490514    9390 start.go:364] acquiring machines lock for no-preload-389000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:44.490566    9390 start.go:368] acquired machines lock for "no-preload-389000" in 37.917µs
	I0530 13:18:44.490589    9390 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:18:44.490593    9390 fix.go:55] fixHost starting: 
	I0530 13:18:44.490709    9390 fix.go:103] recreateIfNeeded on no-preload-389000: state=Stopped err=<nil>
	W0530 13:18:44.490713    9390 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:18:44.501519    9390 out.go:177] * Restarting existing qemu2 VM for "no-preload-389000" ...
	I0530 13:18:44.505700    9390 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:72:dc:2e:ec:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/no-preload-389000/disk.qcow2
	I0530 13:18:44.507734    9390 main.go:141] libmachine: STDOUT: 
	I0530 13:18:44.507747    9390 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:44.507765    9390 fix.go:57] fixHost completed within 17.172209ms
	I0530 13:18:44.507770    9390 start.go:83] releasing machines lock for "no-preload-389000", held for 17.200792ms
	W0530 13:18:44.507866    9390 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-389000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-389000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:44.515662    9390 out.go:177] 
	W0530 13:18:44.519760    9390 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:18:44.519775    9390 out.go:239] * 
	* 
	W0530 13:18:44.520262    9390 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:18:44.534636    9390 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-389000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000: exit status 7 (30.570375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-389000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.19s)
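The libmachine lines show the calling convention of socket_vmnet_client (socket path first, then the command to wrap), so the connection failure can be reproduced without qemu at all. A sketch under that assumption; "true" is an arbitrary no-op stand-in for qemu-system-aarch64.

    # Sketch: isolate the socket_vmnet failure from qemu (calling convention taken from the
    # libmachine command line above: <client> <socket> <command...>; "true" is an arbitrary no-op).
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    # 'Connection refused' here points at the daemon or a stale socket, independent of any minikube profile.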

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-212000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000: exit status 7 (30.436208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-212000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
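The wait aborts while building the client config because the old-k8s-version-212000 context is gone, so no pod check ever runs. A sketch of a manual equivalent once the context exists; the label selector and the timeout are assumptions (standard kubernetes-dashboard manifests, arbitrary duration), not values taken from the harness.

    # Sketch: wait for the dashboard pod by hand (label selector and timeout are assumptions).
    kubectl --context old-k8s-version-212000 -n kubernetes-dashboard \
        wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m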

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-212000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-212000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-212000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.330334ms)

                                                
                                                
** stderr ** 
	W0530 13:18:44.138212    9415 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "old-k8s-version-212000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-212000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000: exit status 7 (27.742042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-212000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-212000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-212000 "sudo crictl images -o json": exit status 89 (39.321666ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-212000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-212000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-212000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000: exit status 7 (27.454ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-212000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
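The check shells into the node and decodes "crictl images -o json"; with the host stopped it gets usage text instead of JSON, hence the decode error on '*'. A sketch of the same listing against a running node; piping through jq is an addition here, and the .images[].repoTags[] path assumes crictl's usual JSON layout.

    # Sketch: list image tags inside a running node (jq path assumes crictl's usual JSON shape).
    out/minikube-darwin-arm64 ssh -p old-k8s-version-212000 "sudo crictl images -o json" \
        | jq -r '.images[].repoTags[]'
    # For v1.16.0 the test wants the k8s.gcr.io images listed in the diff above.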

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-212000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-212000 --alsologtostderr -v=1: exit status 89 (39.163542ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-212000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:18:44.260410    9422 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:18:44.260544    9422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:44.260546    9422 out.go:309] Setting ErrFile to fd 2...
	I0530 13:18:44.260549    9422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:44.260622    9422 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:18:44.260832    9422 out.go:303] Setting JSON to false
	I0530 13:18:44.260840    9422 mustload.go:65] Loading cluster: old-k8s-version-212000
	I0530 13:18:44.261025    9422 config.go:182] Loaded profile config "old-k8s-version-212000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0530 13:18:44.265642    9422 out.go:177] * The control plane node must be running for this command
	I0530 13:18:44.269853    9422 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-212000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-212000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000: exit status 7 (27.635709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-212000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000: exit status 7 (27.813041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-212000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.09s)
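pause exits 89 whenever the control plane is not running, which is all this failure shows. A sketch of a guard that reuses the same status call the harness runs in its post-mortem; the guard itself is an addition, not part of the test.

    # Sketch: only pause when the host reports Running (same status/format invocation as the post-mortem).
    if [ "$(out/minikube-darwin-arm64 status -p old-k8s-version-212000 --format='{{.Host}}')" = "Running" ]; then
        out/minikube-darwin-arm64 pause -p old-k8s-version-212000 --alsologtostderr -v=1
    else
        echo "control plane not running; skipping pause"
    fi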

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-389000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000: exit status 7 (28.614958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-389000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-389000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-389000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-389000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.018833ms)

** stderr ** 
	W0530 13:18:44.627501    9445 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "no-preload-389000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-389000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000: exit status 7 (28.294166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-389000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
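
Note: the repeated 'context "no-preload-389000" does not exist' errors above appear to follow directly from the cluster never having come up for this profile: no context was written to the kubeconfig, and the loader even reports the file itself as missing. A minimal sketch for confirming this by hand is shown below; it assumes only that kubectl is on PATH and reuses the kubeconfig path reported in the stderr above.

    # list whatever contexts the test kubeconfig actually contains (expected: none for this run)
    kubectl --kubeconfig /Users/jenkins/minikube-integration/16597-6175/kubeconfig config get-contexts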

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-389000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-389000 "sudo crictl images -o json": exit status 89 (44.608875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-389000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-389000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-389000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000: exit status 7 (29.122833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-389000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
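
Note: this check runs `sudo crictl images -o json` inside the node over SSH, decodes the JSON, and diffs the repo tags against the expected v1.27.2 image list shown above. Because the control plane is not running, the SSH command returns minikube's exit-89 banner instead of JSON, which is why the decoder trips over the leading '*'. With a running node, roughly the same comparison could be reproduced by hand as sketched below; the `.images[].repoTags[]` path assumes crictl's usual JSON layout, and jq is not part of the test harness.

    # approximate the image check manually once the node is up (jq and the JSON path are assumptions)
    out/minikube-darwin-arm64 ssh -p no-preload-389000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]' | sort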

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-389000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-389000 --alsologtostderr -v=1: exit status 89 (40.542167ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-389000"

-- /stdout --
** stderr ** 
	I0530 13:18:44.757762    9457 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:18:44.757911    9457 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:44.757914    9457 out.go:309] Setting ErrFile to fd 2...
	I0530 13:18:44.757917    9457 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:44.757983    9457 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:18:44.758183    9457 out.go:303] Setting JSON to false
	I0530 13:18:44.758192    9457 mustload.go:65] Loading cluster: no-preload-389000
	I0530 13:18:44.758374    9457 config.go:182] Loaded profile config "no-preload-389000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:18:44.762646    9457 out.go:177] * The control plane node must be running for this command
	I0530 13:18:44.766851    9457 out.go:177]   To start a cluster, run: "minikube start -p no-preload-389000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-389000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000: exit status 7 (34.322625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-389000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000: exit status 7 (33.009875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-389000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

TestStartStop/group/embed-certs/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-493000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-493000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (9.774641042s)

-- stdout --
	* [embed-certs-493000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-493000 in cluster embed-certs-493000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-493000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0530 13:18:44.776729    9458 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:18:44.776830    9458 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:44.776833    9458 out.go:309] Setting ErrFile to fd 2...
	I0530 13:18:44.776835    9458 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:44.776904    9458 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:18:44.777911    9458 out.go:303] Setting JSON to false
	I0530 13:18:44.795218    9458 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4695,"bootTime":1685473229,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:18:44.795271    9458 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:18:44.799680    9458 out.go:177] * [embed-certs-493000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:18:44.809746    9458 notify.go:220] Checking for updates...
	I0530 13:18:44.812653    9458 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:18:44.815702    9458 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:18:44.818669    9458 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:18:44.821645    9458 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:18:44.824722    9458 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:18:44.827664    9458 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:18:44.830892    9458 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:18:44.831473    9458 config.go:182] Loaded profile config "no-preload-389000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:18:44.831684    9458 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:18:44.835706    9458 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:18:44.842639    9458 start.go:295] selected driver: qemu2
	I0530 13:18:44.842649    9458 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:18:44.842666    9458 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:18:44.844511    9458 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:18:44.848666    9458 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:18:44.849886    9458 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:18:44.849904    9458 cni.go:84] Creating CNI manager for ""
	I0530 13:18:44.849921    9458 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:18:44.849927    9458 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0530 13:18:44.849932    9458 start_flags.go:319] config:
	{Name:embed-certs-493000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-493000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:18:44.850013    9458 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:44.858663    9458 out.go:177] * Starting control plane node embed-certs-493000 in cluster embed-certs-493000
	I0530 13:18:44.862598    9458 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:18:44.862629    9458 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:18:44.862638    9458 cache.go:57] Caching tarball of preloaded images
	I0530 13:18:44.862709    9458 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:18:44.862714    9458 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:18:44.862773    9458 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/embed-certs-493000/config.json ...
	I0530 13:18:44.862786    9458 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/embed-certs-493000/config.json: {Name:mk9d74b35f99665658ca0a786206d7ba8744318d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:18:44.862974    9458 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:18:44.862987    9458 start.go:364] acquiring machines lock for embed-certs-493000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:44.863010    9458 start.go:368] acquired machines lock for "embed-certs-493000" in 18.333µs
	I0530 13:18:44.863022    9458 start.go:93] Provisioning new machine with config: &{Name:embed-certs-493000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-493000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:18:44.863052    9458 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:18:44.871692    9458 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:18:44.886136    9458 start.go:159] libmachine.API.Create for "embed-certs-493000" (driver="qemu2")
	I0530 13:18:44.886164    9458 client.go:168] LocalClient.Create starting
	I0530 13:18:44.886239    9458 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:18:44.886259    9458 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:44.886273    9458 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:44.886321    9458 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:18:44.886338    9458 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:44.886345    9458 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:44.886722    9458 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:18:45.041597    9458 main.go:141] libmachine: Creating SSH key...
	I0530 13:18:45.159481    9458 main.go:141] libmachine: Creating Disk image...
	I0530 13:18:45.159490    9458 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:18:45.159650    9458 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/disk.qcow2
	I0530 13:18:45.168334    9458 main.go:141] libmachine: STDOUT: 
	I0530 13:18:45.168349    9458 main.go:141] libmachine: STDERR: 
	I0530 13:18:45.168406    9458 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/disk.qcow2 +20000M
	I0530 13:18:45.176632    9458 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:18:45.176657    9458 main.go:141] libmachine: STDERR: 
	I0530 13:18:45.176672    9458 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/disk.qcow2
	I0530 13:18:45.176677    9458 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:18:45.176709    9458 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:51:20:d8:d1:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/disk.qcow2
	I0530 13:18:45.178535    9458 main.go:141] libmachine: STDOUT: 
	I0530 13:18:45.178548    9458 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:45.178568    9458 client.go:171] LocalClient.Create took 292.405917ms
	I0530 13:18:47.180742    9458 start.go:128] duration metric: createHost completed in 2.317680375s
	I0530 13:18:47.180825    9458 start.go:83] releasing machines lock for "embed-certs-493000", held for 2.317851917s
	W0530 13:18:47.180884    9458 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:47.197038    9458 out.go:177] * Deleting "embed-certs-493000" in qemu2 ...
	W0530 13:18:47.212993    9458 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:47.213020    9458 start.go:702] Will try again in 5 seconds ...
	I0530 13:18:52.215134    9458 start.go:364] acquiring machines lock for embed-certs-493000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:52.215614    9458 start.go:368] acquired machines lock for "embed-certs-493000" in 386.042µs
	I0530 13:18:52.215784    9458 start.go:93] Provisioning new machine with config: &{Name:embed-certs-493000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-493000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:18:52.216093    9458 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:18:52.222110    9458 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:18:52.268675    9458 start.go:159] libmachine.API.Create for "embed-certs-493000" (driver="qemu2")
	I0530 13:18:52.268730    9458 client.go:168] LocalClient.Create starting
	I0530 13:18:52.268853    9458 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:18:52.268902    9458 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:52.268920    9458 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:52.268993    9458 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:18:52.269021    9458 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:52.269032    9458 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:52.269619    9458 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:18:52.397382    9458 main.go:141] libmachine: Creating SSH key...
	I0530 13:18:52.465336    9458 main.go:141] libmachine: Creating Disk image...
	I0530 13:18:52.465342    9458 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:18:52.465493    9458 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/disk.qcow2
	I0530 13:18:52.474183    9458 main.go:141] libmachine: STDOUT: 
	I0530 13:18:52.474204    9458 main.go:141] libmachine: STDERR: 
	I0530 13:18:52.474258    9458 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/disk.qcow2 +20000M
	I0530 13:18:52.481611    9458 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:18:52.481623    9458 main.go:141] libmachine: STDERR: 
	I0530 13:18:52.481634    9458 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/disk.qcow2
	I0530 13:18:52.481639    9458 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:18:52.481682    9458 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:78:75:71:00:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/disk.qcow2
	I0530 13:18:52.483204    9458 main.go:141] libmachine: STDOUT: 
	I0530 13:18:52.483217    9458 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:52.483231    9458 client.go:171] LocalClient.Create took 214.50025ms
	I0530 13:18:54.485325    9458 start.go:128] duration metric: createHost completed in 2.269263459s
	I0530 13:18:54.485396    9458 start.go:83] releasing machines lock for "embed-certs-493000", held for 2.269787167s
	W0530 13:18:54.485956    9458 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-493000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-493000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:54.501533    9458 out.go:177] 
	W0530 13:18:54.506781    9458 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:18:54.506809    9458 out.go:239] * 
	* 
	W0530 13:18:54.508892    9458 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:18:54.518491    9458 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-493000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000: exit status 7 (50.182833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.83s)
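
Note: as with the other FirstStart failures in this run, the root cause is the host networking helper rather than Kubernetes: every attempt to launch QEMU through /opt/socket_vmnet/bin/socket_vmnet_client fails with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning nothing is listening on that socket. A rough sketch for checking the daemon on the CI host follows; only the socket path and the client path are taken from the log, while the daemon binary location, the launchd assumption and the gateway address are guesses that would need to match the host's actual setup.

    # is anything providing the socket?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet    # assumes a launchd-managed install
    # if the daemon is down, it could be started by hand (binary path and flags are assumptions)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &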

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-920000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-920000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (11.326230875s)

-- stdout --
	* [default-k8s-diff-port-920000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-920000 in cluster default-k8s-diff-port-920000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-920000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0530 13:18:45.553726    9501 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:18:45.553825    9501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:45.553828    9501 out.go:309] Setting ErrFile to fd 2...
	I0530 13:18:45.553831    9501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:45.553901    9501 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:18:45.554918    9501 out.go:303] Setting JSON to false
	I0530 13:18:45.570190    9501 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4696,"bootTime":1685473229,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:18:45.570260    9501 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:18:45.579028    9501 out.go:177] * [default-k8s-diff-port-920000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:18:45.583106    9501 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:18:45.583199    9501 notify.go:220] Checking for updates...
	I0530 13:18:45.589976    9501 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:18:45.593037    9501 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:18:45.595974    9501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:18:45.599008    9501 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:18:45.602010    9501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:18:45.605307    9501 config.go:182] Loaded profile config "embed-certs-493000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:18:45.605373    9501 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:18:45.605396    9501 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:18:45.608959    9501 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:18:45.614936    9501 start.go:295] selected driver: qemu2
	I0530 13:18:45.614941    9501 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:18:45.614946    9501 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:18:45.616852    9501 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:18:45.619938    9501 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:18:45.623144    9501 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:18:45.623175    9501 cni.go:84] Creating CNI manager for ""
	I0530 13:18:45.623184    9501 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:18:45.623189    9501 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0530 13:18:45.623196    9501 start_flags.go:319] config:
	{Name:default-k8s-diff-port-920000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-920000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Stat
icIP:}
	I0530 13:18:45.623307    9501 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:45.631987    9501 out.go:177] * Starting control plane node default-k8s-diff-port-920000 in cluster default-k8s-diff-port-920000
	I0530 13:18:45.636019    9501 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:18:45.636042    9501 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:18:45.636056    9501 cache.go:57] Caching tarball of preloaded images
	I0530 13:18:45.636119    9501 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:18:45.636124    9501 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:18:45.636190    9501 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/default-k8s-diff-port-920000/config.json ...
	I0530 13:18:45.636208    9501 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/default-k8s-diff-port-920000/config.json: {Name:mk59ff88198dc2f55da9445cbe21856fe2f27369 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:18:45.636402    9501 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:18:45.636421    9501 start.go:364] acquiring machines lock for default-k8s-diff-port-920000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:47.180967    9501 start.go:368] acquired machines lock for "default-k8s-diff-port-920000" in 1.544551083s
	I0530 13:18:47.181105    9501 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-920000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-920000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:18:47.181362    9501 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:18:47.189171    9501 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:18:47.235182    9501 start.go:159] libmachine.API.Create for "default-k8s-diff-port-920000" (driver="qemu2")
	I0530 13:18:47.235220    9501 client.go:168] LocalClient.Create starting
	I0530 13:18:47.235355    9501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:18:47.235397    9501 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:47.235424    9501 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:47.235498    9501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:18:47.235527    9501 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:47.235541    9501 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:47.236226    9501 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:18:47.363327    9501 main.go:141] libmachine: Creating SSH key...
	I0530 13:18:47.406390    9501 main.go:141] libmachine: Creating Disk image...
	I0530 13:18:47.406403    9501 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:18:47.406597    9501 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/disk.qcow2
	I0530 13:18:47.415178    9501 main.go:141] libmachine: STDOUT: 
	I0530 13:18:47.415191    9501 main.go:141] libmachine: STDERR: 
	I0530 13:18:47.415262    9501 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/disk.qcow2 +20000M
	I0530 13:18:47.422403    9501 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:18:47.422413    9501 main.go:141] libmachine: STDERR: 
	I0530 13:18:47.422431    9501 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/disk.qcow2
	I0530 13:18:47.422441    9501 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:18:47.422472    9501 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:2c:f0:27:86:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/disk.qcow2
	I0530 13:18:47.423978    9501 main.go:141] libmachine: STDOUT: 
	I0530 13:18:47.423989    9501 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:47.424007    9501 client.go:171] LocalClient.Create took 188.784917ms
	I0530 13:18:49.426322    9501 start.go:128] duration metric: createHost completed in 2.244939125s
	I0530 13:18:49.426465    9501 start.go:83] releasing machines lock for "default-k8s-diff-port-920000", held for 2.245501667s
	W0530 13:18:49.426518    9501 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:49.435326    9501 out.go:177] * Deleting "default-k8s-diff-port-920000" in qemu2 ...
	W0530 13:18:49.457181    9501 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:49.457242    9501 start.go:702] Will try again in 5 seconds ...
	I0530 13:18:54.458400    9501 start.go:364] acquiring machines lock for default-k8s-diff-port-920000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:54.485494    9501 start.go:368] acquired machines lock for "default-k8s-diff-port-920000" in 27.002791ms
	I0530 13:18:54.485678    9501 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-920000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-920000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:18:54.485940    9501 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:18:54.496609    9501 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:18:54.543462    9501 start.go:159] libmachine.API.Create for "default-k8s-diff-port-920000" (driver="qemu2")
	I0530 13:18:54.543508    9501 client.go:168] LocalClient.Create starting
	I0530 13:18:54.543645    9501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:18:54.543683    9501 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:54.543700    9501 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:54.543763    9501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:18:54.543790    9501 main.go:141] libmachine: Decoding PEM data...
	I0530 13:18:54.543806    9501 main.go:141] libmachine: Parsing certificate...
	I0530 13:18:54.544335    9501 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:18:54.672038    9501 main.go:141] libmachine: Creating SSH key...
	I0530 13:18:54.788623    9501 main.go:141] libmachine: Creating Disk image...
	I0530 13:18:54.788632    9501 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:18:54.788783    9501 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/disk.qcow2
	I0530 13:18:54.797829    9501 main.go:141] libmachine: STDOUT: 
	I0530 13:18:54.797855    9501 main.go:141] libmachine: STDERR: 
	I0530 13:18:54.797948    9501 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/disk.qcow2 +20000M
	I0530 13:18:54.806162    9501 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:18:54.806190    9501 main.go:141] libmachine: STDERR: 
	I0530 13:18:54.806209    9501 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/disk.qcow2
	I0530 13:18:54.806216    9501 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:18:54.806273    9501 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:1f:0a:7b:55:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/disk.qcow2
	I0530 13:18:54.807818    9501 main.go:141] libmachine: STDOUT: 
	I0530 13:18:54.807833    9501 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:54.807850    9501 client.go:171] LocalClient.Create took 264.306959ms
	I0530 13:18:56.809994    9501 start.go:128] duration metric: createHost completed in 2.324036291s
	I0530 13:18:56.810073    9501 start.go:83] releasing machines lock for "default-k8s-diff-port-920000", held for 2.324600375s
	W0530 13:18:56.810661    9501 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-920000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-920000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:56.824202    9501 out.go:177] 
	W0530 13:18:56.827403    9501 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:18:56.827457    9501 out.go:239] * 
	* 
	W0530 13:18:56.830249    9501 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:18:56.840176    9501 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-920000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000: exit status 7 (62.3825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.39s)

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-493000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-493000 create -f testdata/busybox.yaml: exit status 1 (30.467959ms)

** stderr ** 
	W0530 13:18:54.606391    9521 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "embed-certs-493000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-493000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000: exit status 7 (32.520708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-493000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000: exit status 7 (32.449042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-493000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-493000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-493000 describe deploy/metrics-server -n kube-system: exit status 1 (26.717792ms)

** stderr ** 
	W0530 13:18:54.755482    9528 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "embed-certs-493000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-493000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000: exit status 7 (28.538292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (7.04s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-493000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-493000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (6.966624667s)

-- stdout --
	* [embed-certs-493000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-493000 in cluster embed-certs-493000
	* Restarting existing qemu2 VM for "embed-certs-493000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-493000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0530 13:18:54.958352    9540 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:18:54.958707    9540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:54.958712    9540 out.go:309] Setting ErrFile to fd 2...
	I0530 13:18:54.958714    9540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:54.958818    9540 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:18:54.960273    9540 out.go:303] Setting JSON to false
	I0530 13:18:54.975745    9540 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4705,"bootTime":1685473229,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:18:54.975809    9540 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:18:54.980750    9540 out.go:177] * [embed-certs-493000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:18:54.987704    9540 notify.go:220] Checking for updates...
	I0530 13:18:54.991708    9540 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:18:54.994748    9540 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:18:54.997763    9540 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:18:55.005741    9540 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:18:55.008788    9540 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:18:55.011700    9540 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:18:55.015089    9540 config.go:182] Loaded profile config "embed-certs-493000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:18:55.015359    9540 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:18:55.019564    9540 out.go:177] * Using the qemu2 driver based on existing profile
	I0530 13:18:55.026697    9540 start.go:295] selected driver: qemu2
	I0530 13:18:55.026705    9540 start.go:870] validating driver "qemu2" against &{Name:embed-certs-493000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-493000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:18:55.026796    9540 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:18:55.028757    9540 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:18:55.028782    9540 cni.go:84] Creating CNI manager for ""
	I0530 13:18:55.028791    9540 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:18:55.028797    9540 start_flags.go:319] config:
	{Name:embed-certs-493000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-493000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:18:55.028875    9540 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:55.037736    9540 out.go:177] * Starting control plane node embed-certs-493000 in cluster embed-certs-493000
	I0530 13:18:55.041654    9540 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:18:55.041689    9540 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:18:55.041710    9540 cache.go:57] Caching tarball of preloaded images
	I0530 13:18:55.041784    9540 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:18:55.041789    9540 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:18:55.041862    9540 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/embed-certs-493000/config.json ...
	I0530 13:18:55.042215    9540 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:18:55.042229    9540 start.go:364] acquiring machines lock for embed-certs-493000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:56.810201    9540 start.go:368] acquired machines lock for "embed-certs-493000" in 1.767957792s
	I0530 13:18:56.810394    9540 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:18:56.810418    9540 fix.go:55] fixHost starting: 
	I0530 13:18:56.811071    9540 fix.go:103] recreateIfNeeded on embed-certs-493000: state=Stopped err=<nil>
	W0530 13:18:56.811110    9540 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:18:56.824201    9540 out.go:177] * Restarting existing qemu2 VM for "embed-certs-493000" ...
	I0530 13:18:56.827424    9540 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:78:75:71:00:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/disk.qcow2
	I0530 13:18:56.837292    9540 main.go:141] libmachine: STDOUT: 
	I0530 13:18:56.837351    9540 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:56.837462    9540 fix.go:57] fixHost completed within 27.046166ms
	I0530 13:18:56.837514    9540 start.go:83] releasing machines lock for "embed-certs-493000", held for 27.221ms
	W0530 13:18:56.837546    9540 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:18:56.837794    9540 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:56.837812    9540 start.go:702] Will try again in 5 seconds ...
	I0530 13:19:01.839978    9540 start.go:364] acquiring machines lock for embed-certs-493000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:19:01.840410    9540 start.go:368] acquired machines lock for "embed-certs-493000" in 325.416µs
	I0530 13:19:01.840569    9540 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:19:01.840589    9540 fix.go:55] fixHost starting: 
	I0530 13:19:01.841315    9540 fix.go:103] recreateIfNeeded on embed-certs-493000: state=Stopped err=<nil>
	W0530 13:19:01.841342    9540 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:19:01.851022    9540 out.go:177] * Restarting existing qemu2 VM for "embed-certs-493000" ...
	I0530 13:19:01.854367    9540 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:78:75:71:00:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/embed-certs-493000/disk.qcow2
	I0530 13:19:01.863481    9540 main.go:141] libmachine: STDOUT: 
	I0530 13:19:01.863541    9540 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:19:01.863647    9540 fix.go:57] fixHost completed within 23.059625ms
	I0530 13:19:01.863673    9540 start.go:83] releasing machines lock for "embed-certs-493000", held for 23.242125ms
	W0530 13:19:01.864085    9540 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-493000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-493000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:19:01.871031    9540 out.go:177] 
	W0530 13:19:01.875163    9540 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:19:01.875222    9540 out.go:239] * 
	* 
	W0530 13:19:01.877800    9540 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:19:01.886010    9540 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-493000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000: exit status 7 (69.961084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.04s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-920000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-920000 create -f testdata/busybox.yaml: exit status 1 (28.172334ms)

** stderr ** 
	W0530 13:18:56.947927    9548 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "default-k8s-diff-port-920000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-920000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000: exit status 7 (27.457916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-920000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000: exit status 7 (27.858833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.08s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-920000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-920000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-920000 describe deploy/metrics-server -n kube-system: exit status 1 (25.95275ms)

** stderr ** 
	W0530 13:18:57.082426    9555 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "default-k8s-diff-port-920000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-920000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000: exit status 7 (28.028125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-920000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-920000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (5.158726417s)

-- stdout --
	* [default-k8s-diff-port-920000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-920000 in cluster default-k8s-diff-port-920000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-920000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-920000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0530 13:18:57.286030    9564 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:18:57.286123    9564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:57.286125    9564 out.go:309] Setting ErrFile to fd 2...
	I0530 13:18:57.286127    9564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:18:57.286199    9564 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:18:57.287135    9564 out.go:303] Setting JSON to false
	I0530 13:18:57.302267    9564 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4708,"bootTime":1685473229,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:18:57.302325    9564 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:18:57.307513    9564 out.go:177] * [default-k8s-diff-port-920000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:18:57.314579    9564 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:18:57.314608    9564 notify.go:220] Checking for updates...
	I0530 13:18:57.321537    9564 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:18:57.324570    9564 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:18:57.327523    9564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:18:57.330532    9564 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:18:57.333538    9564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:18:57.336670    9564 config.go:182] Loaded profile config "default-k8s-diff-port-920000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:18:57.336889    9564 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:18:57.341507    9564 out.go:177] * Using the qemu2 driver based on existing profile
	I0530 13:18:57.347516    9564 start.go:295] selected driver: qemu2
	I0530 13:18:57.347524    9564 start.go:870] validating driver "qemu2" against &{Name:default-k8s-diff-port-920000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-920000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:18:57.347614    9564 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:18:57.349684    9564 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0530 13:18:57.349710    9564 cni.go:84] Creating CNI manager for ""
	I0530 13:18:57.349719    9564 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:18:57.349731    9564 start_flags.go:319] config:
	{Name:default-k8s-diff-port-920000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-9200
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:18:57.349801    9564 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:18:57.357575    9564 out.go:177] * Starting control plane node default-k8s-diff-port-920000 in cluster default-k8s-diff-port-920000
	I0530 13:18:57.361530    9564 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:18:57.361554    9564 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:18:57.361570    9564 cache.go:57] Caching tarball of preloaded images
	I0530 13:18:57.361634    9564 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:18:57.361641    9564 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:18:57.361715    9564 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/default-k8s-diff-port-920000/config.json ...
	I0530 13:18:57.362103    9564 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:18:57.362117    9564 start.go:364] acquiring machines lock for default-k8s-diff-port-920000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:18:57.362145    9564 start.go:368] acquired machines lock for "default-k8s-diff-port-920000" in 21.959µs
	I0530 13:18:57.362158    9564 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:18:57.362162    9564 fix.go:55] fixHost starting: 
	I0530 13:18:57.362292    9564 fix.go:103] recreateIfNeeded on default-k8s-diff-port-920000: state=Stopped err=<nil>
	W0530 13:18:57.362303    9564 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:18:57.370551    9564 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-920000" ...
	I0530 13:18:57.374563    9564 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:1f:0a:7b:55:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/disk.qcow2
	I0530 13:18:57.376371    9564 main.go:141] libmachine: STDOUT: 
	I0530 13:18:57.376388    9564 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:18:57.376419    9564 fix.go:57] fixHost completed within 14.257583ms
	I0530 13:18:57.376424    9564 start.go:83] releasing machines lock for "default-k8s-diff-port-920000", held for 14.27575ms
	W0530 13:18:57.376433    9564 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:18:57.376495    9564 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:18:57.376499    9564 start.go:702] Will try again in 5 seconds ...
	I0530 13:19:02.378393    9564 start.go:364] acquiring machines lock for default-k8s-diff-port-920000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:19:02.378461    9564 start.go:368] acquired machines lock for "default-k8s-diff-port-920000" in 51.083µs
	I0530 13:19:02.378480    9564 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:19:02.378485    9564 fix.go:55] fixHost starting: 
	I0530 13:19:02.378619    9564 fix.go:103] recreateIfNeeded on default-k8s-diff-port-920000: state=Stopped err=<nil>
	W0530 13:19:02.378624    9564 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:19:02.383244    9564 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-920000" ...
	I0530 13:19:02.387193    9564 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:1f:0a:7b:55:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/default-k8s-diff-port-920000/disk.qcow2
	I0530 13:19:02.388976    9564 main.go:141] libmachine: STDOUT: 
	I0530 13:19:02.388988    9564 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:19:02.389007    9564 fix.go:57] fixHost completed within 10.521959ms
	I0530 13:19:02.389012    9564 start.go:83] releasing machines lock for "default-k8s-diff-port-920000", held for 10.545042ms
	W0530 13:19:02.389088    9564 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-920000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-920000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:19:02.396096    9564 out.go:177] 
	W0530 13:19:02.399176    9564 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:19:02.399186    9564 out.go:239] * 
	* 
	W0530 13:19:02.399660    9564 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:19:02.412982    9564 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-920000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000: exit status 7 (28.152042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.19s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-493000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000: exit status 7 (31.018333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-493000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-493000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-493000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.748583ms)

** stderr ** 
	W0530 13:19:02.027457    9579 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "embed-certs-493000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-493000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000: exit status 7 (28.020875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-493000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-493000 "sudo crictl images -o json": exit status 89 (40.351708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-493000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-493000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-493000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000: exit status 7 (27.481ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-493000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-493000 --alsologtostderr -v=1: exit status 89 (39.280208ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-493000"

-- /stdout --
** stderr ** 
	I0530 13:19:02.150874    9586 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:19:02.151025    9586 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:19:02.151027    9586 out.go:309] Setting ErrFile to fd 2...
	I0530 13:19:02.151030    9586 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:19:02.151098    9586 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:19:02.151297    9586 out.go:303] Setting JSON to false
	I0530 13:19:02.151306    9586 mustload.go:65] Loading cluster: embed-certs-493000
	I0530 13:19:02.151486    9586 config.go:182] Loaded profile config "embed-certs-493000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:19:02.155185    9586 out.go:177] * The control plane node must be running for this command
	I0530 13:19:02.159154    9586 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-493000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-493000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000: exit status 7 (28.268709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-493000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000: exit status 7 (27.496708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-920000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000: exit status 7 (29.431583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-920000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-920000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-920000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.740375ms)

** stderr ** 
	W0530 13:19:02.503601    9608 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
	error: context "default-k8s-diff-port-920000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-920000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000: exit status 7 (28.566042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-920000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-920000 "sudo crictl images -o json": exit status 89 (44.518917ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-920000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-920000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-920000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000: exit status 7 (30.298292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-920000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-920000 --alsologtostderr -v=1: exit status 89 (38.260292ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-920000"

-- /stdout --
** stderr ** 
	I0530 13:19:02.635885    9621 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:19:02.636010    9621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:19:02.636013    9621 out.go:309] Setting ErrFile to fd 2...
	I0530 13:19:02.636016    9621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:19:02.636092    9621 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:19:02.636302    9621 out.go:303] Setting JSON to false
	I0530 13:19:02.636312    9621 mustload.go:65] Loading cluster: default-k8s-diff-port-920000
	I0530 13:19:02.636509    9621 config.go:182] Loaded profile config "default-k8s-diff-port-920000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:19:02.641182    9621 out.go:177] * The control plane node must be running for this command
	I0530 13:19:02.642701    9621 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-920000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-920000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000: exit status 7 (28.786917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-920000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000: exit status 7 (34.398917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (9.88s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-969000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-969000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (9.811332667s)

-- stdout --
	* [newest-cni-969000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-969000 in cluster newest-cni-969000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-969000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0530 13:19:02.658563    9623 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:19:02.658676    9623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:19:02.658680    9623 out.go:309] Setting ErrFile to fd 2...
	I0530 13:19:02.658682    9623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:19:02.658769    9623 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:19:02.660004    9623 out.go:303] Setting JSON to false
	I0530 13:19:02.677638    9623 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4713,"bootTime":1685473229,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:19:02.677709    9623 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:19:02.681087    9623 out.go:177] * [newest-cni-969000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:19:02.688239    9623 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:19:02.688268    9623 notify.go:220] Checking for updates...
	I0530 13:19:02.695152    9623 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:19:02.698169    9623 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:19:02.701052    9623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:19:02.710153    9623 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:19:02.717127    9623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:19:02.720536    9623 config.go:182] Loaded profile config "default-k8s-diff-port-920000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:19:02.720597    9623 config.go:182] Loaded profile config "multinode-060000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:19:02.720621    9623 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:19:02.724057    9623 out.go:177] * Using the qemu2 driver based on user configuration
	I0530 13:19:02.731155    9623 start.go:295] selected driver: qemu2
	I0530 13:19:02.731163    9623 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:19:02.731169    9623 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:19:02.732823    9623 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0530 13:19:02.732849    9623 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0530 13:19:02.740170    9623 out.go:177] * Automatically selected the socket_vmnet network
	I0530 13:19:02.744280    9623 start_flags.go:934] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0530 13:19:02.744295    9623 cni.go:84] Creating CNI manager for ""
	I0530 13:19:02.744304    9623 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:19:02.744307    9623 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0530 13:19:02.744312    9623 start_flags.go:319] config:
	{Name:newest-cni-969000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-969000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/s
ocket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:19:02.744397    9623 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:19:02.752132    9623 out.go:177] * Starting control plane node newest-cni-969000 in cluster newest-cni-969000
	I0530 13:19:02.755007    9623 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:19:02.755053    9623 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:19:02.755063    9623 cache.go:57] Caching tarball of preloaded images
	I0530 13:19:02.755139    9623 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:19:02.755145    9623 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:19:02.755215    9623 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/newest-cni-969000/config.json ...
	I0530 13:19:02.755227    9623 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/newest-cni-969000/config.json: {Name:mk0a5c63c83b5208ca6d0bc06f8798a05f41cf8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:19:02.755463    9623 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:19:02.755478    9623 start.go:364] acquiring machines lock for newest-cni-969000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:19:02.755505    9623 start.go:368] acquired machines lock for "newest-cni-969000" in 18.125µs
	I0530 13:19:02.755517    9623 start.go:93] Provisioning new machine with config: &{Name:newest-cni-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-969000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:19:02.755549    9623 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:19:02.764107    9623 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:19:02.778290    9623 start.go:159] libmachine.API.Create for "newest-cni-969000" (driver="qemu2")
	I0530 13:19:02.778309    9623 client.go:168] LocalClient.Create starting
	I0530 13:19:02.778373    9623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:19:02.778404    9623 main.go:141] libmachine: Decoding PEM data...
	I0530 13:19:02.778414    9623 main.go:141] libmachine: Parsing certificate...
	I0530 13:19:02.778458    9623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:19:02.778472    9623 main.go:141] libmachine: Decoding PEM data...
	I0530 13:19:02.778479    9623 main.go:141] libmachine: Parsing certificate...
	I0530 13:19:02.778793    9623 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:19:02.942254    9623 main.go:141] libmachine: Creating SSH key...
	I0530 13:19:03.000738    9623 main.go:141] libmachine: Creating Disk image...
	I0530 13:19:03.000749    9623 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:19:03.000933    9623 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/disk.qcow2
	I0530 13:19:03.010011    9623 main.go:141] libmachine: STDOUT: 
	I0530 13:19:03.010038    9623 main.go:141] libmachine: STDERR: 
	I0530 13:19:03.010106    9623 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/disk.qcow2 +20000M
	I0530 13:19:03.018201    9623 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:19:03.018219    9623 main.go:141] libmachine: STDERR: 
	I0530 13:19:03.018257    9623 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/disk.qcow2
	I0530 13:19:03.018267    9623 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:19:03.018314    9623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:47:a7:6a:1c:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/disk.qcow2
	I0530 13:19:03.020194    9623 main.go:141] libmachine: STDOUT: 
	I0530 13:19:03.020208    9623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:19:03.020227    9623 client.go:171] LocalClient.Create took 241.91775ms
	I0530 13:19:05.022439    9623 start.go:128] duration metric: createHost completed in 2.26691125s
	I0530 13:19:05.022528    9623 start.go:83] releasing machines lock for "newest-cni-969000", held for 2.267033291s
	W0530 13:19:05.022580    9623 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:19:05.034658    9623 out.go:177] * Deleting "newest-cni-969000" in qemu2 ...
	W0530 13:19:05.054584    9623 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:19:05.054610    9623 start.go:702] Will try again in 5 seconds ...
	I0530 13:19:10.056709    9623 start.go:364] acquiring machines lock for newest-cni-969000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:19:10.057160    9623 start.go:368] acquired machines lock for "newest-cni-969000" in 346µs
	I0530 13:19:10.057280    9623 start.go:93] Provisioning new machine with config: &{Name:newest-cni-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-969000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0530 13:19:10.057579    9623 start.go:125] createHost starting for "" (driver="qemu2")
	I0530 13:19:10.067441    9623 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0530 13:19:10.114921    9623 start.go:159] libmachine.API.Create for "newest-cni-969000" (driver="qemu2")
	I0530 13:19:10.114971    9623 client.go:168] LocalClient.Create starting
	I0530 13:19:10.115087    9623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/ca.pem
	I0530 13:19:10.115135    9623 main.go:141] libmachine: Decoding PEM data...
	I0530 13:19:10.115170    9623 main.go:141] libmachine: Parsing certificate...
	I0530 13:19:10.115236    9623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16597-6175/.minikube/certs/cert.pem
	I0530 13:19:10.115271    9623 main.go:141] libmachine: Decoding PEM data...
	I0530 13:19:10.115285    9623 main.go:141] libmachine: Parsing certificate...
	I0530 13:19:10.115788    9623 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso...
	I0530 13:19:10.244233    9623 main.go:141] libmachine: Creating SSH key...
	I0530 13:19:10.380418    9623 main.go:141] libmachine: Creating Disk image...
	I0530 13:19:10.380426    9623 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0530 13:19:10.380608    9623 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/disk.qcow2
	I0530 13:19:10.389358    9623 main.go:141] libmachine: STDOUT: 
	I0530 13:19:10.389372    9623 main.go:141] libmachine: STDERR: 
	I0530 13:19:10.389430    9623 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/disk.qcow2 +20000M
	I0530 13:19:10.396642    9623 main.go:141] libmachine: STDOUT: Image resized.
	
	I0530 13:19:10.396657    9623 main.go:141] libmachine: STDERR: 
	I0530 13:19:10.396682    9623 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/disk.qcow2
	I0530 13:19:10.396688    9623 main.go:141] libmachine: Starting QEMU VM...
	I0530 13:19:10.396723    9623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:eb:c6:7b:dc:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/disk.qcow2
	I0530 13:19:10.398276    9623 main.go:141] libmachine: STDOUT: 
	I0530 13:19:10.398289    9623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:19:10.398302    9623 client.go:171] LocalClient.Create took 283.328708ms
	I0530 13:19:12.400472    9623 start.go:128] duration metric: createHost completed in 2.342920125s
	I0530 13:19:12.400519    9623 start.go:83] releasing machines lock for "newest-cni-969000", held for 2.343374625s
	W0530 13:19:12.401102    9623 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:19:12.412756    9623 out.go:177] 
	W0530 13:19:12.416694    9623 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:19:12.416716    9623 out.go:239] * 
	* 
	W0530 13:19:12.419381    9623 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:19:12.429656    9623 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-969000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-969000 -n newest-cni-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-969000 -n newest-cni-969000: exit status 7 (67.153125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.88s)

TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-969000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-969000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (5.172445s)

-- stdout --
	* [newest-cni-969000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-969000 in cluster newest-cni-969000
	* Restarting existing qemu2 VM for "newest-cni-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0530 13:19:12.752036    9671 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:19:12.752146    9671 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:19:12.752148    9671 out.go:309] Setting ErrFile to fd 2...
	I0530 13:19:12.752150    9671 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:19:12.752220    9671 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:19:12.753195    9671 out.go:303] Setting JSON to false
	I0530 13:19:12.768368    9671 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4723,"bootTime":1685473229,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:19:12.768433    9671 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:19:12.777637    9671 out.go:177] * [newest-cni-969000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:19:12.781664    9671 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:19:12.781707    9671 notify.go:220] Checking for updates...
	I0530 13:19:12.787554    9671 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:19:12.790599    9671 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:19:12.792100    9671 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:19:12.795642    9671 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:19:12.798630    9671 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:19:12.801907    9671 config.go:182] Loaded profile config "newest-cni-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:19:12.802132    9671 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:19:12.805569    9671 out.go:177] * Using the qemu2 driver based on existing profile
	I0530 13:19:12.812574    9671 start.go:295] selected driver: qemu2
	I0530 13:19:12.812580    9671 start.go:870] validating driver "qemu2" against &{Name:newest-cni-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-969000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<n
il> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:19:12.812632    9671 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:19:12.814489    9671 start_flags.go:934] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0530 13:19:12.814510    9671 cni.go:84] Creating CNI manager for ""
	I0530 13:19:12.814520    9671 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:19:12.814526    9671 start_flags.go:319] config:
	{Name:newest-cni-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-969000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeReques
ted:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:19:12.814592    9671 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:19:12.822592    9671 out.go:177] * Starting control plane node newest-cni-969000 in cluster newest-cni-969000
	I0530 13:19:12.826623    9671 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:19:12.826647    9671 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:19:12.826660    9671 cache.go:57] Caching tarball of preloaded images
	I0530 13:19:12.826734    9671 preload.go:174] Found /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0530 13:19:12.826739    9671 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:19:12.826802    9671 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/newest-cni-969000/config.json ...
	I0530 13:19:12.827037    9671 cache.go:195] Successfully downloaded all kic artifacts
	I0530 13:19:12.827048    9671 start.go:364] acquiring machines lock for newest-cni-969000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:19:12.827073    9671 start.go:368] acquired machines lock for "newest-cni-969000" in 19.583µs
	I0530 13:19:12.827082    9671 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:19:12.827085    9671 fix.go:55] fixHost starting: 
	I0530 13:19:12.827187    9671 fix.go:103] recreateIfNeeded on newest-cni-969000: state=Stopped err=<nil>
	W0530 13:19:12.827195    9671 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:19:12.832591    9671 out.go:177] * Restarting existing qemu2 VM for "newest-cni-969000" ...
	I0530 13:19:12.836648    9671 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:eb:c6:7b:dc:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/disk.qcow2
	I0530 13:19:12.838440    9671 main.go:141] libmachine: STDOUT: 
	I0530 13:19:12.838458    9671 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:19:12.838488    9671 fix.go:57] fixHost completed within 11.401958ms
	I0530 13:19:12.838493    9671 start.go:83] releasing machines lock for "newest-cni-969000", held for 11.416708ms
	W0530 13:19:12.838499    9671 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:19:12.838554    9671 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:19:12.838558    9671 start.go:702] Will try again in 5 seconds ...
	I0530 13:19:17.840697    9671 start.go:364] acquiring machines lock for newest-cni-969000: {Name:mk8d01343c4d85b8a88c92a4dd0939d29c7aa2d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0530 13:19:17.841205    9671 start.go:368] acquired machines lock for "newest-cni-969000" in 399.042µs
	I0530 13:19:17.841359    9671 start.go:96] Skipping create...Using existing machine configuration
	I0530 13:19:17.841379    9671 fix.go:55] fixHost starting: 
	I0530 13:19:17.842097    9671 fix.go:103] recreateIfNeeded on newest-cni-969000: state=Stopped err=<nil>
	W0530 13:19:17.842127    9671 fix.go:129] unexpected machine state, will restart: <nil>
	I0530 13:19:17.846673    9671 out.go:177] * Restarting existing qemu2 VM for "newest-cni-969000" ...
	I0530 13:19:17.851879    9671 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:eb:c6:7b:dc:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16597-6175/.minikube/machines/newest-cni-969000/disk.qcow2
	I0530 13:19:17.861308    9671 main.go:141] libmachine: STDOUT: 
	I0530 13:19:17.861370    9671 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0530 13:19:17.861474    9671 fix.go:57] fixHost completed within 20.095625ms
	I0530 13:19:17.861495    9671 start.go:83] releasing machines lock for "newest-cni-969000", held for 20.26825ms
	W0530 13:19:17.861823    9671 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0530 13:19:17.870649    9671 out.go:177] 
	W0530 13:19:17.873767    9671 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0530 13:19:17.873796    9671 out.go:239] * 
	* 
	W0530 13:19:17.876493    9671 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:19:17.885604    9671 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-969000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-969000 -n newest-cni-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-969000 -n newest-cni-969000: exit status 7 (70.782834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-969000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-969000 "sudo crictl images -o json": exit status 89 (45.210417ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-969000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-969000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-969000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-969000 -n newest-cni-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-969000 -n newest-cni-969000: exit status 7 (28.840958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-969000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-969000 --alsologtostderr -v=1: exit status 89 (38.857375ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-969000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:19:18.071699    9684 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:19:18.071836    9684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:19:18.071839    9684 out.go:309] Setting ErrFile to fd 2...
	I0530 13:19:18.071841    9684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:19:18.071904    9684 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:19:18.072111    9684 out.go:303] Setting JSON to false
	I0530 13:19:18.072119    9684 mustload.go:65] Loading cluster: newest-cni-969000
	I0530 13:19:18.072294    9684 config.go:182] Loaded profile config "newest-cni-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:19:18.076019    9684 out.go:177] * The control plane node must be running for this command
	I0530 13:19:18.079402    9684 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-969000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-969000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-969000 -n newest-cni-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-969000 -n newest-cni-969000: exit status 7 (28.782542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-969000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-969000 -n newest-cni-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-969000 -n newest-cni-969000: exit status 7 (29.636167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (75/236)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.27.2/json-events 12.35
11 TestDownloadOnly/v1.27.2/preload-exists 0
14 TestDownloadOnly/v1.27.2/kubectl 0
15 TestDownloadOnly/v1.27.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.27
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.27
19 TestBinaryMirror 0.34
29 TestHyperKitDriverInstallOrUpdate 7.99
33 TestErrorSpam/start 0.37
34 TestErrorSpam/status 0.09
35 TestErrorSpam/pause 0.11
36 TestErrorSpam/unpause 0.12
37 TestErrorSpam/stop 0.16
40 TestFunctional/serial/CopySyncFile 0
42 TestFunctional/serial/AuditLog 0
48 TestFunctional/serial/CacheCmd/cache/add_remote 3.71
49 TestFunctional/serial/CacheCmd/cache/add_local 1.1
50 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
51 TestFunctional/serial/CacheCmd/cache/list 0.03
54 TestFunctional/serial/CacheCmd/cache/delete 0.07
62 TestFunctional/parallel/ConfigCmd 0.21
64 TestFunctional/parallel/DryRun 0.27
65 TestFunctional/parallel/InternationalLanguage 0.11
71 TestFunctional/parallel/AddonsCmd 0.12
86 TestFunctional/parallel/License 0.37
87 TestFunctional/parallel/Version/short 0.04
94 TestFunctional/parallel/ImageCommands/Setup 2.1
107 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
117 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.28
118 TestFunctional/parallel/ProfileCmd/profile_not_create 0.15
119 TestFunctional/parallel/ProfileCmd/profile_list 0.11
120 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
128 TestFunctional/delete_addon-resizer_images 0.2
129 TestFunctional/delete_my-image_image 0.04
130 TestFunctional/delete_minikube_cached_images 0.04
139 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.05
144 TestJSONOutput/start/Audit 0
146 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/pause/Audit 0
152 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/unpause/Audit 0
158 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/stop/Command 0.04
162 TestJSONOutput/stop/Audit 0
164 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
166 TestErrorJSONOutput 0.35
171 TestMainNoArgs 0.03
228 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
232 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
233 TestNoKubernetes/serial/ProfileList 0.16
234 TestNoKubernetes/serial/Stop 0.06
236 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
254 TestStartStop/group/old-k8s-version/serial/Stop 0.06
255 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
259 TestStartStop/group/no-preload/serial/Stop 0.06
260 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
276 TestStartStop/group/embed-certs/serial/Stop 0.06
277 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
281 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
282 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
294 TestStartStop/group/newest-cni/serial/DeployApp 0
295 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
296 TestStartStop/group/newest-cni/serial/Stop 0.06
297 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
299 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
300 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-063000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-063000: exit status 85 (95.956375ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-063000 | jenkins | v1.30.1 | 30 May 23 13:04 PDT |          |
	|         | -p download-only-063000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/30 13:04:49
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0530 13:04:49.753461    6595 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:04:49.753570    6595 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:04:49.753573    6595 out.go:309] Setting ErrFile to fd 2...
	I0530 13:04:49.753576    6595 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:04:49.753642    6595 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	W0530 13:04:49.753707    6595 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16597-6175/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16597-6175/.minikube/config/config.json: no such file or directory
	I0530 13:04:49.754911    6595 out.go:303] Setting JSON to true
	I0530 13:04:49.772777    6595 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3860,"bootTime":1685473229,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:04:49.772847    6595 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:04:49.777785    6595 out.go:97] [download-only-063000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:04:49.781950    6595 out.go:169] MINIKUBE_LOCATION=16597
	I0530 13:04:49.777924    6595 notify.go:220] Checking for updates...
	W0530 13:04:49.777932    6595 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball: no such file or directory
	I0530 13:04:49.788542    6595 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:04:49.796928    6595 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:04:49.799812    6595 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:04:49.802954    6595 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	W0530 13:04:49.807228    6595 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0530 13:04:49.807396    6595 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:04:49.810865    6595 out.go:97] Using the qemu2 driver based on user configuration
	I0530 13:04:49.810881    6595 start.go:295] selected driver: qemu2
	I0530 13:04:49.810894    6595 start.go:870] validating driver "qemu2" against <nil>
	I0530 13:04:49.810947    6595 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0530 13:04:49.813931    6595 out.go:169] Automatically selected the socket_vmnet network
	I0530 13:04:49.818949    6595 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0530 13:04:49.819033    6595 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0530 13:04:49.819062    6595 cni.go:84] Creating CNI manager for ""
	I0530 13:04:49.819090    6595 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0530 13:04:49.819095    6595 start_flags.go:319] config:
	{Name:download-only-063000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-063000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:04:49.819285    6595 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:04:49.823839    6595 out.go:97] Downloading VM boot image ...
	I0530 13:04:49.823879    6595 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/iso/arm64/minikube-v1.30.1-1684885329-16572-arm64.iso
	I0530 13:05:01.434780    6595 out.go:97] Starting control plane node download-only-063000 in cluster download-only-063000
	I0530 13:05:01.434811    6595 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0530 13:05:01.494208    6595 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0530 13:05:01.494283    6595 cache.go:57] Caching tarball of preloaded images
	I0530 13:05:01.495274    6595 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0530 13:05:01.499483    6595 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0530 13:05:01.499489    6595 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0530 13:05:01.628983    6595 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0530 13:05:11.758015    6595 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0530 13:05:11.758168    6595 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0530 13:05:12.402860    6595 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0530 13:05:12.403053    6595 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/download-only-063000/config.json ...
	I0530 13:05:12.403072    6595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/download-only-063000/config.json: {Name:mk75755e468446e335bbf12293cdade13be013e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0530 13:05:12.403317    6595 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0530 13:05:12.404305    6595 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0530 13:05:12.846683    6595 out.go:169] 
	W0530 13:05:12.850350    6595 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16597-6175/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107992378 0x107992378 0x107992378 0x107992378 0x107992378 0x107992378 0x107992378] Decompressors:map[bz2:0x14000590918 gz:0x14000590970 tar:0x14000590920 tar.bz2:0x14000590930 tar.gz:0x14000590940 tar.xz:0x14000590950 tar.zst:0x14000590960 tbz2:0x14000590930 tgz:0x14000590940 txz:0x14000590950 tzst:0x14000590960 xz:0x14000590978 zip:0x14000590980 zst:0x14000590990] Getters:map[file:0x14001188580 http:0x140009eea00 https:0x140009eea50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0530 13:05:12.850384    6595 out_reason.go:110] 
	W0530 13:05:12.858282    6595 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0530 13:05:12.862221    6595 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-063000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.27.2/json-events (12.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-063000 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-063000 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=qemu2 : (12.354429375s)
--- PASS: TestDownloadOnly/v1.27.2/json-events (12.35s)

                                                
                                    
TestDownloadOnly/v1.27.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/preload-exists
--- PASS: TestDownloadOnly/v1.27.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/kubectl
--- PASS: TestDownloadOnly/v1.27.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-063000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-063000: exit status 85 (77.241875ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-063000 | jenkins | v1.30.1 | 30 May 23 13:04 PDT |          |
	|         | -p download-only-063000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-063000 | jenkins | v1.30.1 | 30 May 23 13:05 PDT |          |
	|         | -p download-only-063000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/30 13:05:13
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0530 13:05:13.055436    6606 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:05:13.055612    6606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:05:13.055615    6606 out.go:309] Setting ErrFile to fd 2...
	I0530 13:05:13.055617    6606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:05:13.055681    6606 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	W0530 13:05:13.055743    6606 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16597-6175/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16597-6175/.minikube/config/config.json: no such file or directory
	I0530 13:05:13.056621    6606 out.go:303] Setting JSON to true
	I0530 13:05:13.071708    6606 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3884,"bootTime":1685473229,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:05:13.071781    6606 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:05:13.076267    6606 out.go:97] [download-only-063000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:05:13.080143    6606 out.go:169] MINIKUBE_LOCATION=16597
	I0530 13:05:13.076371    6606 notify.go:220] Checking for updates...
	I0530 13:05:13.086130    6606 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:05:13.089260    6606 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:05:13.092277    6606 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:05:13.093758    6606 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	W0530 13:05:13.100239    6606 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0530 13:05:13.100528    6606 config.go:182] Loaded profile config "download-only-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0530 13:05:13.100557    6606 start.go:778] api.Load failed for download-only-063000: filestore "download-only-063000": Docker machine "download-only-063000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0530 13:05:13.100580    6606 driver.go:375] Setting default libvirt URI to qemu:///system
	W0530 13:05:13.100590    6606 start.go:778] api.Load failed for download-only-063000: filestore "download-only-063000": Docker machine "download-only-063000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0530 13:05:13.104230    6606 out.go:97] Using the qemu2 driver based on existing profile
	I0530 13:05:13.104240    6606 start.go:295] selected driver: qemu2
	I0530 13:05:13.104243    6606 start.go:870] validating driver "qemu2" against &{Name:download-only-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-063000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:05:13.106024    6606 cni.go:84] Creating CNI manager for ""
	I0530 13:05:13.106039    6606 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0530 13:05:13.106048    6606 start_flags.go:319] config:
	{Name:download-only-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:download-only-063000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:05:13.106109    6606 iso.go:125] acquiring lock: {Name:mk0883f839043fc8aa686c29aff10881e18e3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0530 13:05:13.109236    6606 out.go:97] Starting control plane node download-only-063000 in cluster download-only-063000
	I0530 13:05:13.109242    6606 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:05:13.249715    6606 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:05:13.249752    6606 cache.go:57] Caching tarball of preloaded images
	I0530 13:05:13.250079    6606 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:05:13.254559    6606 out.go:97] Downloading Kubernetes v1.27.2 preload ...
	I0530 13:05:13.254572    6606 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 ...
	I0530 13:05:13.398149    6606 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4?checksum=md5:4271952d77a401a4cbcfc4225771d46f -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0530 13:05:20.530983    6606 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 ...
	I0530 13:05:20.531112    6606 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 ...
	I0530 13:05:21.090639    6606 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0530 13:05:21.090713    6606 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16597-6175/.minikube/profiles/download-only-063000/config.json ...
	I0530 13:05:21.090987    6606 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0530 13:05:21.091148    6606 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/16597-6175/.minikube/cache/darwin/arm64/v1.27.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-063000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.27s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-063000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.27s)

                                                
                                    
TestBinaryMirror (0.34s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-577000 --alsologtostderr --binary-mirror http://127.0.0.1:50607 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-577000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-577000
--- PASS: TestBinaryMirror (0.34s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (7.99s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.99s)

                                                
                                    
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.09s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 status: exit status 7 (30.5945ms)

                                                
                                                
-- stdout --
	nospam-257000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 status: exit status 7 (27.9625ms)

                                                
                                                
-- stdout --
	nospam-257000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 status: exit status 7 (28.432375ms)

                                                
                                                
-- stdout --
	nospam-257000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

                                                
                                    
TestErrorSpam/pause (0.11s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 pause: exit status 89 (38.190833ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-257000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 pause" failed: exit status 89
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 pause: exit status 89 (37.873875ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-257000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 pause" failed: exit status 89
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 pause: exit status 89 (37.741ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-257000"

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 pause" failed: exit status 89
--- PASS: TestErrorSpam/pause (0.11s)

                                                
                                    
TestErrorSpam/unpause (0.12s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 unpause: exit status 89 (38.753458ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-257000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 unpause" failed: exit status 89
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 unpause: exit status 89 (37.820166ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-257000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 unpause" failed: exit status 89
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 unpause: exit status 89 (38.873584ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-257000"

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 unpause" failed: exit status 89
--- PASS: TestErrorSpam/unpause (0.12s)

                                                
                                    
TestErrorSpam/stop (0.16s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 stop
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-257000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-257000 stop
--- PASS: TestErrorSpam/stop (0.16s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /Users/jenkins/minikube-integration/16597-6175/.minikube/files/etc/test/nested/copy/6593/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.71s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-darwin-arm64 -p functional-602000 cache add registry.k8s.io/pause:3.1: (1.275689333s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 cache add registry.k8s.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-darwin-arm64 -p functional-602000 cache add registry.k8s.io/pause:3.3: (1.304628s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 cache add registry.k8s.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-darwin-arm64 -p functional-602000 cache add registry.k8s.io/pause:latest: (1.123937375s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.71s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-602000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4084448679/001
functional_test.go:1084: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 cache add minikube-local-cache-test:functional-602000
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 cache delete minikube-local-cache-test:functional-602000
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-602000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)
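Taken together, the cache subtests above exercise a small add/list/delete round trip. A sketch under the same assumptions (functional-602000 profile, minikube on PATH):

    minikube -p functional-602000 cache add registry.k8s.io/pause:3.1   # pull the image and store it in the local cache
    minikube cache list                                                 # cached images are listed globally, not per profile
    minikube cache delete registry.k8s.io/pause:3.1                     # drop it from the cache again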

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.21s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 config get cpus: exit status 14 (28.559958ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 config get cpus: exit status 14 (27.971833ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)
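The run above shows the config contract being tested: get on an unset key fails with exit status 14, while set and unset succeed silently. A sketch of the same round trip for the functional-602000 profile:

    minikube -p functional-602000 config set cpus 2
    minikube -p functional-602000 config get cpus     # prints 2, exits 0
    minikube -p functional-602000 config unset cpus
    minikube -p functional-602000 config get cpus     # "specified key could not be found in config", exits 14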

                                                
                                    
TestFunctional/parallel/DryRun (0.27s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-602000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:969: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-602000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (157.280166ms)

                                                
                                                
-- stdout --
	* [functional-602000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:07:04.061284    7148 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:07:04.061492    7148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:07:04.061497    7148 out.go:309] Setting ErrFile to fd 2...
	I0530 13:07:04.061501    7148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:07:04.061605    7148 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:07:04.063172    7148 out.go:303] Setting JSON to false
	I0530 13:07:04.082847    7148 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3995,"bootTime":1685473229,"procs":428,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:07:04.082914    7148 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:07:04.088156    7148 out.go:177] * [functional-602000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	I0530 13:07:04.095188    7148 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:07:04.095182    7148 notify.go:220] Checking for updates...
	I0530 13:07:04.102135    7148 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:07:04.105220    7148 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:07:04.108156    7148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:07:04.109507    7148 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:07:04.112137    7148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:07:04.115415    7148 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:07:04.115656    7148 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:07:04.119949    7148 out.go:177] * Using the qemu2 driver based on existing profile
	I0530 13:07:04.127140    7148 start.go:295] selected driver: qemu2
	I0530 13:07:04.127146    7148 start.go:870] validating driver "qemu2" against &{Name:functional-602000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:functional-602000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:07:04.127187    7148 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:07:04.133043    7148 out.go:177] 
	W0530 13:07:04.137106    7148 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0530 13:07:04.141095    7148 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-602000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
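The first dry run fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because 250MB is below the 1800MB minimum reported in the output. A sketch of a dry run that should validate, assuming the same profile and driver; 2200MB is an illustrative value at or above that minimum:

    minikube start -p functional-602000 --dry-run --memory=2200 --driver=qemu2   # validates the request without creating a VM
    echo $?                                                                      # expected 0 when the request is satisfiable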

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-602000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-602000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (106.697791ms)

                                                
                                                
-- stdout --
	* [functional-602000] minikube v1.30.1 sur Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0530 13:07:04.286097    7158 out.go:296] Setting OutFile to fd 1 ...
	I0530 13:07:04.286212    7158 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:07:04.286215    7158 out.go:309] Setting ErrFile to fd 2...
	I0530 13:07:04.286218    7158 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0530 13:07:04.286296    7158 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16597-6175/.minikube/bin
	I0530 13:07:04.287597    7158 out.go:303] Setting JSON to false
	I0530 13:07:04.303537    7158 start.go:125] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3995,"bootTime":1685473229,"procs":428,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3.1","kernelVersion":"22.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0530 13:07:04.303615    7158 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0530 13:07:04.308221    7158 out.go:177] * [functional-602000] minikube v1.30.1 sur Darwin 13.3.1 (arm64)
	I0530 13:07:04.315177    7158 out.go:177]   - MINIKUBE_LOCATION=16597
	I0530 13:07:04.315225    7158 notify.go:220] Checking for updates...
	I0530 13:07:04.319179    7158 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	I0530 13:07:04.322085    7158 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0530 13:07:04.325181    7158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0530 13:07:04.328181    7158 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	I0530 13:07:04.331071    7158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0530 13:07:04.334432    7158 config.go:182] Loaded profile config "functional-602000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0530 13:07:04.334650    7158 driver.go:375] Setting default libvirt URI to qemu:///system
	I0530 13:07:04.339097    7158 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0530 13:07:04.346142    7158 start.go:295] selected driver: qemu2
	I0530 13:07:04.346148    7158 start.go:870] validating driver "qemu2" against &{Name:functional-602000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:functional-602000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0530 13:07:04.346201    7158 start.go:881] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0530 13:07:04.351046    7158 out.go:177] 
	W0530 13:07:04.355177    7158 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0530 13:07:04.359073    7158 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
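The French output above is the point of the test: the same failing dry run is repeated under a French locale. A sketch, assuming minikube picks the translation up from the LC_ALL/LANG environment (which is how the suite appears to drive it):

    LC_ALL=fr_FR.UTF-8 minikube start -p functional-602000 --dry-run --memory 250MB --driver=qemu2
    # same RSRC_INSUFFICIENT_REQ_MEMORY failure (exit 23), with the message localised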

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/License (0.37s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.1s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.033060416s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-602000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-602000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image rm gcr.io/google-containers/addon-resizer:functional-602000 --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-602000
functional_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 image save --daemon gcr.io/google-containers/addon-resizer:functional-602000 --alsologtostderr
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-602000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.28s)
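ImageSaveDaemon round-trips an image between the host Docker daemon and the cluster runtime: the host copy is removed, image save --daemon exports it back from the cluster, and docker image inspect confirms it is present again. A sketch with the tag used above (minikube on PATH assumed):

    docker rmi gcr.io/google-containers/addon-resizer:functional-602000             # drop the host copy
    minikube -p functional-602000 image save --daemon gcr.io/google-containers/addon-resizer:functional-602000
    docker image inspect gcr.io/google-containers/addon-resizer:functional-602000   # succeeds once the image is back in the daemon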

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.11s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1313: Took "79.118958ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1327: Took "32.734625ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1364: Took "80.043167ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1377: Took "31.747541ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012785542s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
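On macOS the tunnel's DNS answers come from the system resolver, which is why dscacheutil is used here rather than dig. A sketch, assuming the tunnel started earlier in this group is still running and the nginx-svc service referenced above exists:

    minikube -p functional-602000 tunnel --alsologtostderr &    # or keep the tunnel running in a separate terminal
    dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
    # a successful lookup includes an ip_address line for the service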

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-602000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.2s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-602000
--- PASS: TestFunctional/delete_addon-resizer_images (0.20s)

                                                
                                    
TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-602000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-602000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.05s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-948000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.05s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (0.04s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-487000 --output=json --user=testUser
--- PASS: TestJSONOutput/stop/Command (0.04s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.35s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-149000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-149000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.787375ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a6d58e57-f15c-4fbd-9f16-84e2a74e026f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-149000] minikube v1.30.1 on Darwin 13.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dddd5fac-e338-4418-a338-c275b875b12d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16597"}}
	{"specversion":"1.0","id":"b8eda05c-13fb-417f-82f5-ca6fd3a53242","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig"}}
	{"specversion":"1.0","id":"707c71b0-8330-4e5b-a602-af310a027b29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"5db9ad1a-ec39-472d-961e-4630229bfea6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3872df5a-6ca8-43a9-a5a0-7c6eb11da9d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube"}}
	{"specversion":"1.0","id":"8ac8653b-d072-47c3-b5c2-0a363abf46d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9f76f877-81f7-4110-849f-20fc7b78e331","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-149000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-149000
--- PASS: TestErrorJSONOutput (0.35s)
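With --output=json every line minikube prints is a single CloudEvents-style JSON object, so error events can be filtered by their type field. A sketch, assuming one JSON object per line as in the output above (grep used purely for illustration):

    minikube start -p json-output-error-149000 --memory=2200 --output=json --wait=true --driver=fail \
      | grep '"type":"io.k8s.sigs.minikube.error"'    # surfaces the DRV_UNSUPPORTED_OS event; minikube itself exits 56
    minikube delete -p json-output-error-149000       # clean up the failed profile, as the test does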

                                                
                                    
TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-040000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (103.637041ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-040000] minikube v1.30.1 on Darwin 13.3.1 (arm64)
	  - MINIKUBE_LOCATION=16597
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16597-6175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16597-6175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
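The MK_USAGE failure above is the intended guard: --no-kubernetes cannot be combined with --kubernetes-version. A sketch of the resolution suggested by the command's own output, assuming the NoKubernetes-040000 profile and minikube on PATH:

    minikube config unset kubernetes-version                                # drop any global version pin
    minikube start -p NoKubernetes-040000 --no-kubernetes --driver=qemu2    # no longer rejected by the flag validation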

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-040000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-040000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.288708ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-040000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.16s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.16s)

                                                
                                    
TestNoKubernetes/serial/Stop (0.06s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-040000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-040000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-040000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.403833ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-040000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-212000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-212000 -n old-k8s-version-212000: exit status 7 (27.5895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-212000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
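EnableAddonAfterStop leans on minikube status exit codes: exit status 7 with the host reported as Stopped is acceptable here, after which the addon can still be enabled. A sketch of the same check for the old-k8s-version-212000 profile (minikube on PATH assumed):

    minikube status --format='{{.Host}}' -p old-k8s-version-212000    # prints "Stopped"
    echo $?                                                           # 7 for a stopped host (non-zero but expected)
    minikube addons enable dashboard -p old-k8s-version-212000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4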

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (0.06s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-389000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-389000 -n no-preload-389000: exit status 7 (27.924625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-389000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (0.06s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-493000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-493000 -n embed-certs-493000: exit status 7 (27.841083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-493000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-920000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-920000 -n default-k8s-diff-port-920000: exit status 7 (28.363417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-920000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-969000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-969000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-969000 -n newest-cni-969000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-969000 -n newest-cni-969000: exit status 7 (29.179125ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-969000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/236)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.27.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.2/cached-images (0.00s)

TestDownloadOnly/v1.27.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.27.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1782: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (11.56s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-602000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1960573802/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1685477184304720000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1960573802/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1685477184304720000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1960573802/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1685477184304720000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1960573802/001/test-1685477184304720000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (49.797125ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (86.301209ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (83.568958ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (86.122958ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (86.234209ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (86.019959ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (84.853125ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "sudo umount -f /mount-9p": exit status 89 (44.62175ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-602000 ssh \"sudo umount -f /mount-9p\"": exit status 89
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-602000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1960573802/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.56s)

TestFunctional/parallel/MountCmd/specific-port (14.4s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-602000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1949615884/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (56.359875ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (85.75675ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (85.576709ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (82.319334ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (84.184083ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (86.680875ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (86.013209ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (84.451ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "sudo umount -f /mount-9p": exit status 89 (44.437625ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-602000 ssh \"sudo umount -f /mount-9p\"": exit status 89
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-602000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1949615884/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (14.40s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.72s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-602000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1412723745/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-602000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1412723745/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-602000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1412723745/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1: exit status 89 (79.919625ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1: exit status 89 (84.29175ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1: exit status 89 (85.878458ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1: exit status 89 (82.304375ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1: exit status 89 (82.781167ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1: exit status 89 (84.6975ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1: exit status 89 (84.750875ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-602000 ssh "findmnt -T" /mount1: exit status 89 (85.452208ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-602000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-602000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1412723745/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-602000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1412723745/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-602000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1412723745/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.72s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.4s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-013000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
W0530 13:12:22.122331    7751 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> netcat: nslookup debug kubernetes.default a-records:
W0530 13:12:22.149661    7753 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> netcat: dig search kubernetes.default:
W0530 13:12:22.175645    7756 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
W0530 13:12:22.201978    7757 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
W0530 13:12:22.227992    7759 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> netcat: nc 10.96.0.10 udp/53:
W0530 13:12:22.256813    7761 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> netcat: nc 10.96.0.10 tcp/53:
W0530 13:12:22.286106    7762 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> netcat: /etc/nsswitch.conf:
W0530 13:12:22.315502    7763 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> netcat: /etc/hosts:
W0530 13:12:22.345115    7764 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> netcat: /etc/resolv.conf:
W0530 13:12:22.372188    7765 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: /etc/hosts:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: /etc/resolv.conf:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
W0530 13:12:22.519570    7775 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> host: crictl pods:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: crictl containers:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> k8s: describe netcat deployment:
W0530 13:12:22.624369    7780 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
error: context "cilium-013000" does not exist

>>> k8s: describe netcat pod(s):
W0530 13:12:22.650443    7781 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
error: context "cilium-013000" does not exist

>>> k8s: netcat logs:
W0530 13:12:22.676615    7782 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
error: context "cilium-013000" does not exist

>>> k8s: describe coredns deployment:
W0530 13:12:22.702428    7783 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
error: context "cilium-013000" does not exist

>>> k8s: describe coredns pods:
W0530 13:12:22.728419    7784 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
error: context "cilium-013000" does not exist

>>> k8s: coredns logs:
W0530 13:12:22.754399    7785 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
error: context "cilium-013000" does not exist

>>> k8s: describe api server pod(s):
W0530 13:12:22.780122    7786 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
error: context "cilium-013000" does not exist

>>> k8s: api server logs:
W0530 13:12:22.805910    7787 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
error: context "cilium-013000" does not exist

>>> host: /etc/cni:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: ip a s:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: ip r s:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: iptables-save:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: iptables table nat:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> k8s: describe cilium daemon set:
W0530 13:12:23.026952    7798 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> k8s: describe cilium daemon set pod(s):
W0530 13:12:23.052959    7799 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> k8s: cilium daemon set container(s) logs (current):
W0530 13:12:23.078973    7800 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
error: context "cilium-013000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
W0530 13:12:23.105051    7801 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
error: context "cilium-013000" does not exist

>>> k8s: describe cilium deployment:
W0530 13:12:23.131401    7802 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> k8s: describe cilium deployment pod(s):
W0530 13:12:23.157120    7803 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> k8s: cilium deployment container(s) logs (current):
W0530 13:12:23.182977    7804 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
error: context "cilium-013000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
W0530 13:12:23.208855    7805 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
error: context "cilium-013000" does not exist

>>> k8s: describe kube-proxy daemon set:
W0530 13:12:23.234694    7806 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
error: context "cilium-013000" does not exist

>>> k8s: describe kube-proxy pod(s):
W0530 13:12:23.260498    7807 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
error: context "cilium-013000" does not exist

>>> k8s: kube-proxy logs:
W0530 13:12:23.286250    7808 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
error: context "cilium-013000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: kubelet daemon config:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> k8s: kubelet logs:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> k8s: kubectl config:
W0530 13:12:23.509592    7819 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
W0530 13:12:23.535455    7820 loader.go:223] Config not found: /Users/jenkins/minikube-integration/16597-6175/kubeconfig
Error in configuration: context was not found for specified context: cilium-013000

>>> host: docker daemon status:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: docker daemon config:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: docker system info:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: cri-docker daemon status:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: cri-docker daemon config:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: cri-dockerd version:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: containerd daemon status:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: containerd daemon config:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: containerd config dump:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: crio daemon status:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: crio daemon config:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: /etc/crio:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

>>> host: crio config:
* Profile "cilium-013000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-013000"

----------------------- debugLogs end: cilium-013000 [took: 2.132515416s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-013000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-013000
--- SKIP: TestNetworkPlugins/group/cilium (2.40s)

TestStartStop/group/disable-driver-mounts (0.27s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-517000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-517000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.27s)